Search results from the DIPF publication database
Your query:
(Keywords: "Multiple-Choice-Verfahren")
4 results found
Model-based treatment of rapid guessing
Deribo, Tobias; Kröhne, Ulf; Goldhammer, Frank
Journal article
| In: Journal of Educational Measurement | 2021
Authors:
Deribo, Tobias; Kröhne, Ulf; Goldhammer, Frank
Title:
Model-based treatment of rapid guessing
In:
Journal of Educational Measurement, 58 (2021) 2, pp. 281-303
DOI:
10.1111/jedm.12290
URL:
https://onlinelibrary.wiley.com/doi/10.1111/jedm.12290?af=R
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Leistungstest; Testkonstruktion; Messverfahren; Computerunterstütztes Verfahren; Frage; Antwort; Verhalten; Dauer; Problemlösen; Modell; Student; Medienkompetenz; Item-Response-Theory; Multiple-Choice-Verfahren; Validität; Panel; Längsschnittuntersuchung
Abstract (English):
The increased availability of time-related information as a result of computer-based assessment has enabled new ways to measure test-taking engagement. One of these is to distinguish between solution behavior and rapid guessing behavior. Prior research has recommended response-level filtering to deal with rapid guessing, but response-level filtering can lead to parameter bias if rapid guessing depends on the measured trait or on (un-)observed covariates. Therefore, a model based on Mislevy and Wu (1996) was applied to investigate the assumption of ignorable missing data that underlies response-level filtering. The model allowed us to investigate different approaches to treating response-level filtered responses in a single framework through model parameterization. The study found that lower-ability test-takers tend to guess rapidly more frequently and are more likely to be unable to solve an item they guessed on, indicating a violation of the assumption of ignorable missing data underlying response-level filtering. Furthermore, ability estimation seemed sensitive to the different approaches to treating response-level filtered responses. Moreover, model-based approaches exhibited better model fit and higher convergent validity evidence than more naïve treatments of rapid guessing. The results illustrate the need to thoroughly investigate the assumptions underlying specific treatments of rapid guessing, as well as the need for robust methods. (DIPF/Orig.)
DIPF department:
Lehr- und Lernqualität in Bildungseinrichtungen
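The response-level filtering discussed in the abstract can be sketched minimally as a response-time cutoff; the threshold, data, and function name below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch: response-level filtering of rapid guesses.
# A response is flagged as a rapid guess when its response time falls
# below a time threshold; flagged responses are treated as missing
# rather than scored. Threshold and data are made-up values.

RT_THRESHOLD = 3.0  # seconds; an arbitrary illustrative cutoff


def filter_rapid_guesses(responses, times, threshold=RT_THRESHOLD):
    """Replace responses given faster than `threshold` with None (missing)."""
    return [r if t >= threshold else None
            for r, t in zip(responses, times)]


# The responses at 1.9s and 2.2s fall below the cutoff and become missing.
filtered = filter_rapid_guesses([1, 0, 1, 1], [12.4, 1.9, 8.7, 2.2])
```

Treating these missing values as ignorable is exactly the assumption the study tests; its model-based alternative keeps the filtered responses in the likelihood instead of discarding them.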
Diagnostik von ICT-Literacy. Multiple-Choice- vs. simulationsbasierte Aufgaben
Goldhammer, Frank; Kröhne, Ulf; Keßel, Yvonne; Senkbeil, Martin; Ihme, Jan Marten
Journal article
| In: Diagnostica | 2014
Authors:
Goldhammer, Frank; Kröhne, Ulf; Keßel, Yvonne; Senkbeil, Martin; Ihme, Jan Marten
Title:
Diagnostik von ICT-Literacy. Multiple-Choice- vs. simulationsbasierte Aufgaben
In:
Diagnostica, 60 (2014) 1, pp. 10-21
DOI:
10.1026/0012-1924/a000113
URN:
urn:nbn:de:0111-pedocs-146050
URL:
http://nbn-resolving.org/urn:nbn:de:0111-pedocs-146050
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
German
Keywords:
Aufgabe; Deutschland; Diagnostik; Informationskompetenz; Informations- und Kommunikationstechnologie; Kompetenz; Multiple-Choice-Verfahren; Psychometrie; Schuljahr 09; Simulation; Test
Abstract (English):
ICT literacy suggests a performance-based assessment by means of tasks presenting interactive (simulated) computer environments and requiring responses by means of mouse and/or keyboard. However, assessment procedures like self-ratings or paper-based performance measures are still commonly used. The present study compares the psychometric properties of simulation-based (SIM) tasks with parallel multiple-choice (MC) tasks that make use of screenshots of software applications. The MC tasks, developed for the National Educational Panel Study (NEPS), reflect the skill to select and retrieve digital information and to perform basic operations (access). In a random groups design, 405 grade 9 students completed the computer-based access items as MC tasks or as SIM tasks as well as the simulation-based Basic Computer Skills (BCS) test. Results show that the majority of MC tasks and SIM tasks differ in difficulty and loading. Consistent convergent validity is indicated by comparably high correlations of the two test forms with BCS.
DIPF department:
Bildungsqualität und Evaluation
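As a rough illustration of the MC-vs-SIM difficulty comparison described above, classical proportion-correct values for one matched item pair can be computed as follows (the study itself compares IRT difficulty and loading parameters; all response data here are hypothetical):

```python
# Illustrative sketch: classical item difficulty (proportion correct)
# for one item administered as an MC task to one random group and as a
# SIM task to the other. All response vectors are hypothetical.

def proportion_correct(item_responses):
    """Share of correct (1) responses among all responses to an item."""
    return sum(item_responses) / len(item_responses)


mc_item = [1, 1, 0, 1, 1, 0, 1, 1]   # hypothetical MC-group responses
sim_item = [1, 0, 0, 1, 0, 0, 1, 0]  # hypothetical SIM-group responses

# In this made-up example the SIM version is harder (0.75 vs 0.375).
difficulty_gap = proportion_correct(mc_item) - proportion_correct(sim_item)
```

A per-item gap like this, computed across all matched pairs, is the classical analogue of the difficulty differences the abstract reports at the IRT level.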
Solving open-domain multiple choice questions with textual entailment and text similarity measures
Dhruva, Neil; Ferschke, Oliver; Gurevych, Iryna
Edited-volume contribution
| In: Cappellato, Linda; Ferro, Nicola; Halvey, Martin; Kraaij, Wessel (Eds.): CLEF2014 Working Notes: Working Notes for the CLEF 2014 Conference, Sheffield, UK, September 15-18, 2014 | Aachen: RWTH | 2014
Authors:
Dhruva, Neil; Ferschke, Oliver; Gurevych, Iryna
Title:
Solving open-domain multiple choice questions with textual entailment and text similarity measures
In:
Cappellato, Linda; Ferro, Nicola; Halvey, Martin; Kraaij, Wessel (Eds.): CLEF2014 Working Notes: Working Notes for the CLEF 2014 Conference, Sheffield, UK, September 15-18, 2014. Aachen: RWTH, 2014 (Workshop Proceedings, 1180), pp. 1375-1385
URN:
urn:nbn:de:0074-1180-0
URL:
http://ceur-ws.org/Vol-1180/CLEF2014wn-QA-DhruvaEt2014.pdf
Document type:
4. Contributions to edited volumes; conference paper/proceedings
Language:
English
Keywords:
Antwort; Computerlinguistik; Computerunterstütztes Verfahren; Frage; Leseverstehen; Multiple-Choice-Verfahren; Textverständnis
Abstract:
In this paper, we present a system for automatically answering open-domain, multiple-choice reading comprehension questions about short English narrative texts. The system is based on state-of-the-art text similarity measures, textual entailment metrics, and coreference resolution, and does not make use of any additional domain-specific background knowledge. Each answer option is scored with a combination of all evaluation metrics, and the options are ranked by their overall score to determine the most likely correct answer. Our best configuration achieved the second-highest score across all competing systems in the entrance exam grading challenge, with a c@1 score of 0.375. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
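The c@1 measure cited in the abstract extends plain accuracy by crediting unanswered questions in proportion to the system's accuracy over all questions. A minimal sketch of the standard formula, c@1 = (n_R + n_U · n_R/n) / n, with hypothetical counts:

```python
def c_at_1(n_right, n_unanswered, n_total):
    """c@1: accuracy plus partial credit for unanswered questions,
    weighted by the overall accuracy n_right / n_total."""
    return (n_right + n_unanswered * (n_right / n_total)) / n_total


# With every question answered, c@1 reduces to plain accuracy;
# e.g. 3 of 8 correct and none skipped gives 0.375 (the counts here
# are hypothetical, chosen to reproduce the score the abstract reports).
score = c_at_1(n_right=3, n_unanswered=0, n_total=8)
```

Skipping two questions instead of answering them wrong would raise the score to (3 + 2 · 3/8) / 8 = 0.46875, which is why the metric rewards declining to guess.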
Multiple-choice versus open-ended response formats of reading test items: A two-dimensional IRT analysis
Rauch, Dominique; Hartig, Johannes
Journal article
| In: Psychological Test and Assessment Modeling | 2010
Authors:
Rauch, Dominique; Hartig, Johannes
Title:
Multiple-choice versus open-ended response formats of reading test items: A two-dimensional IRT analysis
In:
Psychological Test and Assessment Modeling, 52 (2010) 4, pp. 354-379
URL:
http://www.psychologie-aktuell.com/fileadmin/download/ptam/4-2010_20101218/02_Rauch.pdf
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Bildungsforschung; Datenanalyse; Deutschland; Experimentelle Psychologie; Itemanalyse; Item-Response-Theory; Leistungsmessung; Lesefertigkeit; Lesetest; Leseverstehen; Mehrebenenanalyse; Messverfahren; Multiple-Choice-Verfahren; Textverständnis
Abstract (English):
The dimensionality of a reading comprehension assessment with non-stem-equivalent multiple-choice (MC) items and open-ended (OE) items was analyzed with German test data from 8,523 9th-graders. We found that a two-dimensional IRT model with within-item multidimensionality, in which MC and OE items load on a general latent dimension and OE items additionally load on a nested latent dimension, had a superior fit compared to a unidimensional model (p < .05). Correlations of general cognitive abilities, orthography, and vocabulary with the general latent dimension were significantly higher than with the nested latent dimension (p < .05). Drawing on experimental studies of the effect of item format on reading processes, we suppose that the general latent dimension measures abilities necessary to master basic reading processes, while the nested latent dimension captures abilities necessary to master higher reading processes. Including gender, language spoken at home, and school track as predictors in latent regression models showed that the well-known advantage of girls and mother-tongue students is found only for the nested latent dimension.
DIPF department:
Bildungsqualität und Evaluation
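The within-item multidimensional structure described above can be sketched as a 2PL-style response probability with a general loading for every item and a nested loading that is nonzero only for OE items; all parameter values below are invented for illustration:

```python
import math

def p_correct(theta_general, theta_nested, a_general, a_nested, b):
    """2PL-style response probability with general and nested loadings."""
    logit = a_general * theta_general + a_nested * theta_nested - b
    return 1.0 / (1.0 + math.exp(-logit))


# MC item: loads only on the general dimension (nested loading fixed to 0).
p_mc = p_correct(0.5, 0.8, a_general=1.2, a_nested=0.0, b=0.3)
# OE item: additionally loads on the nested dimension.
p_oe = p_correct(0.5, 0.8, a_general=1.2, a_nested=0.7, b=0.3)
```

Fixing `a_nested` to zero for MC items is what makes the nested dimension capture only what is specific to the open-ended format, which is the structure whose fit the article tests against a unidimensional model.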