Search results from the DIPF publication database
Your query: (Persons: "Buerger" and "Sarah")
2 results found
What makes the difference? The impact of item properties on mode effects in reading assessments
Authors: Buerger, Sarah; Kroehne, Ulf; Köhler, Carmen; Goldhammer, Frank
In: Studies in Educational Evaluation, 62 (2019), pp. 1-9
DOI: 10.1016/j.stueduc.2019.04.005
URL: https://www.sciencedirect.com/science/article/abs/pii/S0191491X18302141
Document type: Journal article
Language: English
Abstract:
The transition from paper-based assessment (PBA) to computer-based assessment (CBA) requires mode effect studies to investigate the comparability of scores across modes. In the National Educational Panel Study, experimental studies were conducted to investigate psychometric differences between modes. In the present study, the cross-mode equivalence of a reading test was examined. The investigation sought to determine whether mode effects can be explained by item properties. The results showed that splitting texts across multiple screens did not affect comparability. However, item difficulty increased in CBA when items in the first and second position of a unit were not presented on the same double page as in PBA. Regarding response formats, assignment tasks on the computer requiring the use of combo boxes were more difficult than on paper, while no difference was found for multiple-choice items. (DIPF/Orig.)
DIPF Department: Bildungsqualität und Evaluation
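The explanatory approach described in the abstract, relating mode effects to item properties, is commonly formalized in an item response framework. As a minimal sketch, assuming a Rasch-type parameterization (the notation below is illustrative, not taken from the paper):

\[
b_i^{\mathrm{CBA}} = b_i^{\mathrm{PBA}} + \delta_i,
\qquad
\delta_i = \gamma_0 + \gamma_1\,\mathrm{split}_i + \gamma_2\,\mathrm{position}_i + \gamma_3\,\mathrm{format}_i + \varepsilon_i
\]

Here \(b_i\) is the difficulty of item \(i\) in each mode, the mode effect \(\delta_i\) is their difference, and the regression relates that effect to the item properties named in the abstract (text split across screens, item position within a unit, response format). A finding such as "combo-box assignment tasks are harder on computer" would then correspond to a positive \(\gamma\) coefficient for that response format.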
Construct equivalence of PISA reading comprehension measured with paper‐based and computer‐based assessments
Authors: Kroehne, Ulf; Buerger, Sarah; Hahnel, Carolin; Goldhammer, Frank
In: Educational Measurement: Issues and Practice, 38 (2019) 3, pp. 97-111
DOI: 10.1111/emip.12280
URL: https://onlinelibrary.wiley.com/doi/abs/10.1111/emip.12280
Document type: Journal article
Language: English
Keywords: Influencing factor; Student achievement; Question; Answer; Interaction; Difference; Comparison; Item response theory; Germany; PISA <Programme for International Student Assessment>; Reading comprehension; Measurement method; Test construction; Correlation; Equivalence; Paper-and-pencil test; Computer-based method; Technology-based testing; Achievement measurement; Test procedure; Test administration
Abstract:
For many years, reading comprehension in the Programme for International Student Assessment (PISA) was measured via paper‐based assessment (PBA). In the 2015 cycle, computer‐based assessment (CBA) was introduced, raising the question of whether central equivalence criteria required for a valid interpretation of the results are fulfilled. As an extension of the PISA 2012 main study in Germany, a random subsample of two intact PISA reading clusters, either computerized or paper‐based, was assessed using a random group design with an additional within‐subject variation. The results are in line with the hypothesis of construct equivalence. That is, the latent cross‐mode correlation of PISA reading comprehension was not significantly different from the expected correlation between the two clusters. Significant mode effects on item difficulties were observed for a small number of items only. Interindividual differences found in mode effects were negatively correlated with reading comprehension, but were not predicted by basic computer skills or gender. Further differences between modes were found with respect to the number of missing values.
DIPF Department: Bildungsqualität und Evaluation
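The equivalence criterion reported in this abstract, comparing the latent cross-mode correlation against an expected benchmark, can be stated compactly. A minimal sketch, with illustrative notation:

\[
H_0:\; \operatorname{Cor}\!\left(\theta^{\mathrm{PBA}},\, \theta^{\mathrm{CBA}}\right) = \rho_{\mathrm{expected}}
\]

where \(\theta^{\mathrm{PBA}}\) and \(\theta^{\mathrm{CBA}}\) are the latent reading scores obtained from the two modes, and \(\rho_{\mathrm{expected}}\) is the correlation one would anticipate between the two intact clusters if both measured the same construct (for example, the correlation between the clusters when both are administered in a single mode). Construct equivalence is retained when the observed cross-mode correlation does not differ significantly from this benchmark, which is the result the study reports.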