Search results in the DIPF database of publications
Your query:
(Persons: "Buerger," and "Sarah")
2 items matching your search terms.
What makes the difference? The impact of item properties on mode effects in reading assessments
Buerger, Sarah; Kroehne, Ulf; Köhler, Carmen; Goldhammer, Frank
Journal Article | In: Studies in Educational Evaluation | 2019
Author(s):
Buerger, Sarah; Kroehne, Ulf; Köhler, Carmen; Goldhammer, Frank
Title:
What makes the difference? The impact of item properties on mode effects in reading assessments
In:
Studies in Educational Evaluation, 62 (2019), pp. 1-9
DOI:
10.1016/j.stueduc.2019.04.005
URL:
https://www.sciencedirect.com/science/article/abs/pii/S0191491X18302141
Publication Type:
Journal article
Language:
English
Abstract:
The transition from paper-based assessment (PBA) to computer-based assessment (CBA) requires mode effect studies to investigate the comparability of scores across modes. In the National Educational Panel Study, experimental studies were conducted to investigate psychometric differences between modes. In the present study, the cross-mode equivalence of a reading test was examined. The investigation sought to determine whether mode effects can be explained by item properties. The results showed that splitting texts between multiple screens did not affect comparability. However, item difficulty was increased in CBA when items in the first and second position of a unit were not presented on the same double-page as in PBA. Regarding response formats, assignment tasks on the computer requiring the use of combo boxes were more difficult than on paper, while no difference was found for multiple choice items. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Construct equivalence of PISA reading comprehension measured with paper‐based and computer‐based assessments
Kroehne, Ulf; Buerger, Sarah; Hahnel, Carolin; Goldhammer, Frank
Journal Article | In: Educational Measurement | 2019
Author(s):
Kroehne, Ulf; Buerger, Sarah; Hahnel, Carolin; Goldhammer, Frank
Title:
Construct equivalence of PISA reading comprehension measured with paper‐based and computer‐based assessments
In:
Educational Measurement, 38 (2019) 3, pp. 97-111
DOI:
10.1111/emip.12280
URL:
https://onlinelibrary.wiley.com/doi/abs/10.1111/emip.12280
Publication Type:
Journal article
Language:
English
Keywords:
Influencing factor; Student achievement; Question; Answer; Interaction; Difference; Comparison; Item Response Theory; Germany; PISA <Programme for International Student Assessment>; Reading comprehension; Measurement procedure; Test construction; Correlation; Equivalence; Paper-and-pencil test; Computer-assisted procedure; Technology-based testing; Achievement measurement; Test procedure; Test administration
Abstract:
For many years, reading comprehension in the Programme for International Student Assessment (PISA) was measured via paper‐based assessment (PBA). In the 2015 cycle, computer‐based assessment (CBA) was introduced, raising the question of whether central equivalence criteria required for a valid interpretation of the results are fulfilled. As an extension of the PISA 2012 main study in Germany, a random subsample of two intact PISA reading clusters, either computerized or paper‐based, was assessed using a random group design with an additional within‐subject variation. The results are in line with the hypothesis of construct equivalence. That is, the latent cross‐mode correlation of PISA reading comprehension was not significantly different from the expected correlation between the two clusters. Significant mode effects on item difficulties were observed for a small number of items only. Interindividual differences found in mode effects were negatively correlated with reading comprehension, but were not predicted by basic computer skills or gender. Further differences between modes were found with respect to the number of missing values.
DIPF-Departments:
Bildungsqualität und Evaluation