Search results in the DIPF database of publications
Your query:
(Keywords: "Adaptives Testen")
18 items matching your search terms.
Simultaneous constrained adaptive item selection for group-based testing
Bengs, Daniel; Kröhne, Ulf; Brefeld, Ulf
Journal Article
| In: Journal of Educational Measurement | 2021
40702
Author(s):
Bengs, Daniel; Kröhne, Ulf; Brefeld, Ulf
Title:
Simultaneous constrained adaptive item selection for group-based testing
In:
Journal of Educational Measurement, 58 (2021) 2, pp. 236-261
DOI:
10.1111/jedm.12285
URL:
https://onlinelibrary.wiley.com/doi/abs/10.1111/jedm.12285
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Adaptive testing; Task; Selection; Computer-based method; Empirical study; Group; Achievement measurement; Model; Simulation; Technology-based testing; Test
Abstract (english):
By tailoring test forms to the test‐taker's proficiency, Computerized Adaptive Testing (CAT) enables substantial increases in testing efficiency over fixed forms testing. When used for formative assessment, the alignment of task difficulty with proficiency increases the chance that teachers can derive useful feedback from assessment data. The application of CAT to formative assessment in the classroom, however, is hindered by the large number of different items used for the whole class; the required familiarization with a large number of test items puts a significant burden on teachers. An improved CAT procedure for group‐based testing is presented, which uses simultaneous automated test assembly to impose a limit on the number of items used per group. The proposed linear model for simultaneous adaptive item selection allows for full adaptivity and the accommodation of constraints on test content. The effectiveness of the group‐based CAT is demonstrated with real‐world items in a simulated adaptive test of 3,000 groups of test‐takers, under different assumptions on group composition. Results show that the group‐based CAT maintained the efficiency of CAT, while a reduction in the number of used items by one half to two‐thirds was achieved, depending on the within‐group variance of proficiencies.
DIPF-Departments:
Bildungsqualität und Evaluation
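The abstract above builds on standard unidimensional CAT machinery. As a rough illustration of that baseline (not of the paper's simultaneous assembly model, which selects items for a whole group at once), the following sketch shows an unconstrained 2PL CAT step: estimate ability by EAP under a standard-normal prior, then pick the unused item with maximum Fisher information. All function names, the item format, and the quadrature grid are illustrative choices, not taken from the article.

```python
import math

def prob_2pl(theta, a, b):
    # Probability of a correct response under the 2PL IRT model
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_estimate(responses, items):
    """EAP ability estimate given responses to `items` [(a, b), ...],
    using a standard-normal prior on a simple quadrature grid."""
    grid = [g / 10.0 for g in range(-40, 41)]
    num = den = 0.0
    for t in grid:
        w = math.exp(-0.5 * t * t)  # unnormalized N(0, 1) prior weight
        for (a, b), u in zip(items, responses):
            p = prob_2pl(t, a, b)
            w *= p if u == 1 else 1.0 - p
        num += t * w
        den += w
    return num / den

def next_item(theta, items, administered):
    # Unconstrained CAT step: pick the unused item with maximum
    # Fisher information a^2 * p * (1 - p) at the current estimate.
    def info(i):
        a, b = items[i]
        p = prob_2pl(theta, a, b)
        return a * a * p * (1.0 - p)
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=info)
```

A group-based variant would additionally impose a shared budget on the number of distinct items used across all test-takers in a class, which is what the paper's linear model for simultaneous item selection formalizes.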
Methodological challenges of international student assessment
Frey, Andreas; Hartig, Johannes
Book Chapter
| In: Harju-Luukkainen, Heidi; McElvany, Nele; Stang, Justine (Eds.): Monitoring student achievement in the 21st century: European policy perspectives and assessment strategies | Cham: Springer | 2020
40524
Author(s):
Frey, Andreas; Hartig, Johannes
Title:
Methodological challenges of international student assessment
In:
Harju-Luukkainen, Heidi; McElvany, Nele; Stang, Justine (Eds.): Monitoring student achievement in the 21st century: European policy perspectives and assessment strategies, Cham: Springer, 2020, pp. 39-49
DOI:
10.1007/978-3-030-38969-7_4
URL:
https://link.springer.com/chapter/10.1007/978-3-030-38969-7_4
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
English
Keywords:
Student achievement test; Achievement measurement; International comparison; Methodology; Challenge; Change; Student achievement; Heterogeneity; Adaptive testing; Survey; Data; Open Science; Validity
Abstract (english):
International large-scale assessments are very successful. One key factor of this success is their rigorous methodological and psychometric basis. Because education systems worldwide are subject to rapid changes, international large-scale assessments need to evolve as well. We describe five current methodological challenges that should be addressed so that large-scale assessments can continue to provide highly useful information on educational outcomes in the future. First, new or changed constructs should be adopted, and constructs with declining importance should be dropped from the assessments. Second, the heterogeneity of student performance within and between countries should be better accounted for. This can be achieved by completing the introduction of computerized adaptive testing into international large-scale assessments and making full use of computers to optimise the testing and scaling process. Third, more analytical effort should be invested in the measurement and modelling of context variables, mainly by applying latent variable modelling. […]
DIPF-Departments:
Bildungsqualität und Evaluation
Usability design and evaluation for a formative assessment feedback
Horn, Florian; Schiffner, Daniel; Gattinger, Thorsten; Sacher, Patrick
Book Chapter
| In: Zender, Raphael; Ifenthaler, Dirk; Leonhardt, Thiemo; Schumacher, Clara (Eds.): DELFI 2020 - die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e. V., 14.-18. September 2020, online | Bonn: Gesellschaft für Informatik | 2020
40266
Author(s):
Horn, Florian; Schiffner, Daniel; Gattinger, Thorsten; Sacher, Patrick
Title:
Usability design and evaluation for a formative assessment feedback
In:
Zender, Raphael; Ifenthaler, Dirk; Leonhardt, Thiemo; Schumacher, Clara (Eds.): DELFI 2020 - die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e. V., 14.-18. September 2020, online, Bonn: Gesellschaft für Informatik, 2020 (Lecture Notes in Informatics, P-308), pp. 121-126
URL:
https://dl.gi.de/handle/20.500.12116/34149
Publication Type:
4. Contributions to edited volumes; conference proceedings
Language:
English
Keywords:
Adaptive testing; Formative evaluation; User behavior; Usability; Feedback; Assessment; University student; Higher education teaching; Programming; Germany
Abstract (english):
In a current research project, we implemented an approach for delivering computer-assisted adaptive testing as an additional form of formative assessment for a lecture. We primarily used these tests as formative assessment during a "fundamentals of computer programming" course. The tests also included individual feedback to further guide the students. To evaluate and improve the formative assessments, we performed a usability study focused on user satisfaction while taking a test and reading the feedback. The study used the User Experience Questionnaire. The results indicate that an adaptive assessment can provide more support, but also show shortcomings of the current implementation. (DIPF/Orig.)
DIPF-Departments:
Informationszentrum Bildung
Adaptive item selection under matroid constraints
Bengs, Daniel; Brefeld, Ulf; Kröhne, Ulf
Journal Article
| In: Journal of Computerized Adaptive Testing | 2018
38642
Author(s):
Bengs, Daniel; Brefeld, Ulf; Kröhne, Ulf
Title:
Adaptive item selection under matroid constraints
In:
Journal of Computerized Adaptive Testing, 6 (2018) 2, pp. 15-36
DOI:
10.7333/1808-0602015
URN:
urn:nbn:de:0111-dipfdocs-166953
URL:
http://www.dipfdocs.de/volltexte/2020/16695/pdf/JCAT_2018_2_Bengs_Brefeld_Kroehne_Adaptive_item_selection_under_matroid_constraints_A.pdf
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Adaptive testing; Algorithm; Computer-based method; Item bank; Measurement procedure; Technology-based testing; Test construction
Abstract (english):
The shadow testing approach (STA; van der Linden & Reese, 1998) is considered the state of the art in constrained item selection for computerized adaptive tests. The present paper shows that certain types of constraints (e.g., bounds on categorical item attributes) induce a matroid on the item bank. This observation is used to devise item selection algorithms that are based on matroid optimization and lead to optimal tests, as the STA does. In particular, a single matroid constraint can be treated optimally by an efficient greedy algorithm that selects the most informative item preserving the integrity of the constraints. A simulation study shows that for applicable constraints, the optimal algorithms realize a decrease in standard error (SE) corresponding to a reduction in test length of up to 10% compared to the maximum priority index (Cheng & Chang, 2009) and up to 30% compared to Kingsbury and Zara's (1991) constrained computerized adaptive testing.
DIPF-Departments:
Bildungsqualität und Evaluation
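The greedy procedure described in the abstract can be pictured, under assumptions, as follows: with a single partition-matroid constraint (an upper bound per categorical item attribute), greedily taking the most informative item that keeps the constraint intact is optimal for matroid-constrained selection. The function names, the 2PL information function, and the data layout below are illustrative; the paper's actual algorithms are more general.

```python
import math

def fisher_info(theta, a, b):
    # 2PL item information at ability theta: a^2 * p * (1 - p)
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def greedy_matroid_select(theta, items, administered, used_per_cat, bounds):
    """One greedy step under a single partition-matroid constraint:
    among unused items whose category bound is not yet exhausted,
    pick the one with maximum information. items: [(a, b, category), ...];
    returns the item index, or None if no feasible item remains."""
    best, best_info = None, -1.0
    for i, (a, b, cat) in enumerate(items):
        if i in administered or used_per_cat.get(cat, 0) >= bounds[cat]:
            continue
        info = fisher_info(theta, a, b)
        if info > best_info:
            best, best_info = i, info
    return best
```

The caller would update `administered` and `used_per_cat` after each administered item, so that the constraint stays intact across the whole test.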
Competence assessment in education. Research, models and instruments
Leutner, Detlev; Fleischer, Jens; Grünkorn, Juliane; Klieme, Eckhard (Eds.)
Compilation Book
| Cham: Springer | 2017
36882
Editor(s):
Leutner, Detlev; Fleischer, Jens; Grünkorn, Juliane; Klieme, Eckhard
Title:
Competence assessment in education. Research, models and instruments
Published:
Cham: Springer, 2017 (Methodology of educational measurement and assessment)
DOI:
10.1007/978-3-319-50030-0
Publication Type:
2. Editorship; edited volume (no special category)
Language:
English
Keywords:
Educational research; Empirical research; Germany; Austria; Switzerland; Luxembourg; Diagnostics; Learning materials; Image; Text; Adult education; Vocational training; Physics; Decision making; Sustainable development; Metacognition; Lower secondary education; Problem solving; Student achievement; Technology-based testing; Psychometrics; Adaptive testing; Feedback; Competence; Assessment; Student; Competence acquisition; Modeling; Primary education; Geography; Literature; Scientific literacy; Self-regulated learning; Teacher; Professional competence; Teacher education; Pedagogy; Professionalism; Subject knowledge; School choice; Recommendation
Abstract (english):
This book addresses challenges in the theoretically and empirically adequate assessment of competencies in educational settings. It presents the scientific projects of the priority program "Competence Models for Assessing Individual Learning Outcomes and Evaluating Educational Processes," which focused on competence assessment across disciplines in Germany. The six-year program coordinated 30 research projects involving experts from the fields of psychology, educational science, and subject-specific didactics. The main reference point for all projects is the concept of "competencies," which are defined as "context-specific cognitive dispositions that are acquired and needed to successfully cope with certain situations or tasks in specific domains" (Koeppen et al., 2008, p. 62). The projects investigate different aspects of competence assessment: The primary focus lies on the development of cognitive models of competencies, complemented by the construction of psychometric models based on these theoretical models. In turn, the psychometric models constitute the basis for the construction of instruments for effectively measuring competencies. The assessment of competencies plays a key role in optimizing educational processes and improving the effectiveness of educational systems. This book contributes to this challenging endeavor by meeting the need for more integrative, interdisciplinary research on the structure, levels, and development of competencies. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Multidimensional adaptive measurement of competences
Frey, Andreas; Kröhne, Ulf; Seitz, Nicki-Nils; Born, Sebastian
Book Chapter
| In: Leutner, Detlev; Fleischer, Jens; Grünkorn, Juliane; Klieme, Eckhard (Eds.): Competence assessment in education: Research, models and instruments | Cham: Springer | 2017
37125
Author(s):
Frey, Andreas; Kröhne, Ulf; Seitz, Nicki-Nils; Born, Sebastian
Title:
Multidimensional adaptive measurement of competences
In:
Leutner, Detlev; Fleischer, Jens; Grünkorn, Juliane; Klieme, Eckhard (Eds.): Competence assessment in education: Research, models and instruments, Cham: Springer, 2017 (Methodology of educational measurement and assessment), pp. 369-387
DOI:
10.1007/978-3-319-50030-0_22
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
English
Keywords:
Adaptive testing; Computer-based method; Item response theory; Test; Software; Simulation
Abstract:
Even though multidimensional adaptive testing (MAT) is advantageous in the measurement of complex competencies, operational applications are still rare. In an attempt to change this situation, this chapter presents four recent developments that foster the applicability of MAT. First, in a simulation study, we show that multiple constraints can be accounted for in MAT without a loss of measurement precision, by using the multidimensional maximum priority index method. Second, the results from another simulation study show that the high efficiency of MAT is mainly due to the fact that MAT considers prior information in the final ability estimation, and not to the fact that MAT uses prior information for item selection. Third, the multidimensional adaptive testing environment is presented. This software can be used to assemble, configure, and apply multidimensional adaptive tests. Last, the application of the software is illustrated for unidimensional and multidimensional adaptive tests. The application of MAT is especially recommended for large-scale assessments of student achievement. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Development, validation and application of a competence model for mathematical problem solving by […]
Leuders, Timo; Bruder, Regina; Kröhne, Ulf; Naccarella, Dominik; Nitsch, Renate; […]
Book Chapter
| In: Leutner, Detlev; Fleischer, Jens; Grünkorn, Juliane; Klieme, Eckhard (Eds.): Competence assessment in education: Research, models and instruments | Cham: Springer | 2017
37127
Author(s):
Leuders, Timo; Bruder, Regina; Kröhne, Ulf; Naccarella, Dominik; Nitsch, Renate; Henning-Kahmann, Jan; Kelava, Augustin; Wirtz, Markus
Title:
Development, validation and application of a competence model for mathematical problem solving by using and translating representations of functions
In:
Leutner, Detlev; Fleischer, Jens; Grünkorn, Juliane; Klieme, Eckhard (Eds.): Competence assessment in education: Research, models and instruments, Cham: Springer, 2017 (Methodology of educational measurement and assessment), pp. 389-406
DOI:
10.1007/978-3-319-50030-0_23
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
English
Keywords:
Adaptive testing; Computer-based method; Mathematics instruction; Problem solving; Competence; Model; Mathematical competence; Psychometrics
Abstract:
In mathematics education, the student's ability to translate between different representations of functions is regarded as a key competence for mastering situations that can be described by mathematical functions. Students are supposed to interpret common representations like numerical tables (N), function graphs (G), verbally or pictorially represented situations (S), and algebraic expressions (A). In a multi-step project (1) a theoretical competence model was constructed by identifying key processes and key dimensions and corresponding item pools, (2) different psychometric models assuming theory-based concurrent competence structures were tested empirically, and (3) finally, a computerized adaptive assessment tool was developed and applied in school practice. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Computergestützte, adaptive und verhaltensnahe Erfassung informations- und […]
Wenzel, S. Franziska C.; Engelhardt, Lena; Hartig, Katja; Kuchta, Kathrin; Frey, Andreas; […]
Book Chapter
| In: Bundesministerium für Bildung und Forschung (Ed.): Forschungsvorhaben in Ankopplung an Large-Scale-Assessments | Berlin: Bundesministerium für Bildung und Forschung | 2016
36592
Author(s):
Wenzel, S. Franziska C.; Engelhardt, Lena; Hartig, Katja; Kuchta, Kathrin; Frey, Andreas; Goldhammer, Frank; Naumann, Johannes; Horz, Holger
Title:
Computergestützte, adaptive und verhaltensnahe Erfassung informations- und kommunikations-technologiebezogener Fertigkeiten (ICT-Skills) (CavE-ICT)
In:
Bundesministerium für Bildung und Forschung (Ed.): Forschungsvorhaben in Ankopplung an Large-Scale-Assessments, Berlin: Bundesministerium für Bildung und Forschung, 2016 (Bildungsforschung, 44), pp. 161-180
URL:
https://www.bmbf.de/pub/Bildungsforschung_Band_44.pdf#page=163
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
German
Keywords:
Adaptive testing; Computer-based method; Germany; Development; Skill; IT education; Information and communication technology; Measurement; Project; Test construction
Abstract:
In 2011, the German Federal Ministry of Education and Research (BMBF) launched its initiative for funding research projects linked to large-scale assessments (LSA). Large-scale assessments are regarded as important instruments of educational monitoring (e.g., Sekretariat der Ständigen Konferenz der Kultusminister der Länder in der Bundesrepublik Deutschland [KMK], 2006). The funding initiative supports research activities that, beyond questions of competence diagnostics, seek to continually improve test design and methods and to open up and adequately model further domains, competencies, and fields of application. The project "Computergestützte, adaptive und verhaltensnahe Erfassung informations- und kommunikationstechnologiebezogener Fertigkeiten (ICT-Skills)" (CavE-ICT) was part of this initiative. It was devoted to developing a computer-based test procedure for the simulation-based assessment of ICT-related skills. This contribution presents the central results of the test development. The resulting test instrument can be used in the context of large-scale assessments as well as for individual diagnostics. The test can also potentially be administered adaptively and rests on a solid theoretical basis. CavE-ICT was a cooperation between Goethe University Frankfurt, the German Institute for International Educational Research (DIPF) in Frankfurt am Main, and Friedrich Schiller University Jena.
The project's overarching "vision" was to develop a computer-based, behavior-oriented, and adaptive instrument for measuring ICT skills that could be used in future (international) large-scale assessments such as PISA, the German National Educational Panel Study (NEPS; Artelt, Weinert & Carstensen, 2013), or the International Computer and Information Literacy Study (ICILS; Bos et al., 2014). The structure of the […] contribution follows the course of the project: first, the theoretical framework on whose basis the computerized test was developed is presented (Chapter 2); then the item development process is sketched (Chapter 3) and the piloting of the developed items in the calibration study is described (Chapter 4); finally, a brief conclusion and outlook are given (Chapter 5). (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Computergestützte, adaptive und verhaltensnahe Erfassung Informations- und […]
Wenzel, S. Franziska C.; Engelhardt, Lena; Kuchta, Kathrin; Hartig, Katja; Naumann, Johannes; […]
Working Papers
| 2015
36844
Author(s):
Wenzel, S. Franziska C.; Engelhardt, Lena; Kuchta, Kathrin; Hartig, Katja; Naumann, Johannes; Goldhammer, Frank; Frey, Andreas; Horz, Holger
Title:
Computergestützte, adaptive und verhaltensnahe Erfassung Informations- und Kommunikationstechnologie-bezogener Fähigkeiten (ICT-Skills) in PISA. Projekt CavE - ICT - PISA; Schlussbericht
Published:
Frankfurt am Main: Universitätsbibliothek; Technische Informationsbibliothek, 2015
DOI:
10.2314/GBV:838771823
URL:
http://edok01.tib.uni-hannover.de/edoks/e01fb15/838771823.pdf
Publication Type:
5. Working and discussion papers; research report/project reports/school feedback
Language:
German
Keywords:
Adaptive testing; Computer-based method; Ability; Research project; Information and communication technology; Measurement procedure; PISA <Programme for International Student Assessment>; Project report; Scaling; Test construction; Validity
Abstract:
The project CavE-ICT-PISA set out to further develop constructs and assessment procedures in the context of the international PISA option ICT Familiarity Questionnaire. The first aim was to give the construct of ICT skills a stronger theoretical foundation and, building on that, to develop and pilot a behavior-based, computerized ICT test. To this end, a theoretical framework was created as a synthesis of existing frameworks and used to develop test items. In an extensive quality assurance process, 70 fully functional items were developed, computerized, and implemented in behavior-based simulation environments. The items can be translated into other languages with a dedicated tool and are thus also internationally applicable. Test administration took place via USB flash drives at a total of 33 schools whose IT equipment had previously been checked against specific performance requirements. The sample consists of 983 fifteen-year-old students. Based on the calibration results for the ICT items, 64 tasks were selected for the final ICT test, which reaches a reliability of .80. When estimating the competence distribution, latent relations with variables such as age, gender, and computer use were also taken into account. A shortened version of the ICT test as well as an adaptive test algorithm were piloted using Monte Carlo simulation studies. Overall, within a short time frame, CavE-ICT-PISA constructed and deployed a test instrument at the highest international standard. The resulting ICT test could also be used as a short version or adaptively, for example as a control variable in computer-based assessments. (DIPF/Orig.)
Abstract (english):
The goal of the project CavE-ICT-PISA was to further develop constructs and assessment instruments in the context of PISA, based on the optional ICT Familiarity Questionnaire. The first partial goal was to give the construct of ICT skills a stronger theoretical grounding and to develop a behavior-related, computer-based test. With this in mind, a theoretical framework was prepared based on a synthesis of existing frameworks, and the items were implemented fully functional in simulation environments after several feedback loops for optimization. The items can be translated into other languages, so the test is internationally applicable. The test was delivered on USB flash drives at 33 German schools, after the schools had been checked for software and hardware readiness. The sample consists of 983 fifteen-year-old students. Based on the study results, 64 items were selected for the final item pool, which has a reliability of .80. For the ability distribution, latent relations to variables like age, sex, and computer experience were modeled. A shorter version of the test as well as an adaptive test algorithm were developed using Monte Carlo simulations. Thus, the goal of the project, to develop and apply an assessment instrument in line with international standards, was met. The developed test can be used as a short version or as an adaptive test, for instance to control for ICT skills in computer-based assessments. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Controlling individuals' time spent on task in speeded performance measures. Experimental time […]
Goldhammer, Frank; Kröhne, Ulf
Journal Article
| In: Applied Psychological Measurement | 2014
34278
Author(s):
Goldhammer, Frank; Kröhne, Ulf
Title:
Controlling individuals' time spent on task in speeded performance measures. Experimental time limits, posterior time limits, and response time modeling
In:
Applied Psychological Measurement, 38 (2014) 4, pp. 255-267
DOI:
10.1177/0146621613517164
URN:
urn:nbn:de:0111-pedocs-127839
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-127839
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Adaptive testing; Germany; Empirical study; Item response theory; Performance pressure; Achievement measurement; Model; Reliability; Student; Student achievement; Grade 12; Technology-based testing; Test item; Validity; Time
Abstract:
The speed-ability trade-off becomes a measurement problem if there is between-subject variation in the speed-ability compromise, as this may affect the comparability of ability estimates. To control individual speed differences, the response-signal (RS) paradigm was applied, requiring an immediate response as soon as an acoustic signal is presented. A figural discrimination task and a word recognition task were completed both in an untimed condition allowing individual differences in time spent on task and in several timed conditions where the time available for item completion was limited using the RS paradigm. Thus, speed was manipulated by varying the available time between stimulus onset and RS. A total of N = 205 high school students participated in the study. Results showed that across timed conditions with decreasing time on task, the ability level and ability variance decreased substantially. Ability correlations between timed conditions were high, whereas correlations between untimed and timed conditions were low. This finding suggests that ability differences inconsistent with those found in the timed conditions are due to individual differences in time on task in the untimed condition. To eliminate these differences, two approaches were considered. First, untimed responses were recoded using two-tailed posterior time limits. As expected, correlations between timed and untimed conditions increased. Second, the log-transformed item response times were included in the item response model, which led to even higher correlations between timed and untimed conditions. Validity and generalizability of the proposed testing procedure are discussed.
DIPF-Departments:
Bildungsqualität und Evaluation
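The posterior time limit idea from the abstract can be sketched as a simple post-hoc recoding: responses whose time on task falls outside a two-tailed window are no longer treated as ordinary responses. Whether out-of-window responses are scored as missing or as incorrect is a design decision the abstract leaves open; the sketch below marks them as missing (None), which is an assumption, and the function name is illustrative.

```python
def recode_with_posterior_limits(responses, times, lower, upper):
    """Recode item responses using a two-tailed posterior time limit:
    responses whose time on task falls outside [lower, upper] seconds
    are treated as missing (None), mimicking a timed condition
    after the fact. responses and times are parallel lists."""
    recoded = []
    for u, t in zip(responses, times):
        recoded.append(u if lower <= t <= upper else None)
    return recoded
```

Correlating ability estimates from such recoded data with those from genuinely timed conditions is then a way to check whether untimed/timed discrepancies stem from time-on-task differences.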