Search results in the DIPF database of publications
Your query:
(Persons: "Kröhne," and "Ulf")
40 items matching your search terms.
Author(s):
Deribo, Tobias; Goldhammer, Frank; Kröhne, Ulf
Title:
Changes in the speed-ability relation through different treatments of rapid guessing
In:
Educational and Psychological Measurement, 83 (2023) 3, pp. 473-494
DOI:
10.1177/00131644221109490
URL:
https://journals.sagepub.com/doi/10.1177/00131644221109490
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Response; Germany; Empirical study; Skill; Information and communication technology; Item-Response-Theory; Achievement test; Model; Panel; Psychometrics; Reliability; Student; Test; Validity; Behavior; Time
Abstract (english):
As researchers in the social sciences, we are often interested in studying constructs that are not directly observable through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is briefly skimmed rather than read and engaged with in depth. Hence, a response given under rapid-guessing behavior biases the constructs and relations of interest. Such bias also appears plausible for latent speed estimates obtained under rapid-guessing behavior, as well as for the identified relation between speed and ability. This bias seems especially problematic considering that the relation between speed and ability has been shown to improve precision in ability estimation. For this reason, we investigate if and how responses and response times obtained under rapid-guessing behavior affect the identified speed-ability relation and the precision of ability estimates in a joint model of speed and ability. The study therefore presents an empirical application that highlights a specific methodological problem resulting from rapid-guessing behavior. Here, we could show that different (non-)treatments of rapid guessing can lead to different conclusions about the underlying speed-ability relation. Furthermore, different rapid-guessing treatments led to markedly different conclusions about gains in precision through joint modeling. The results show the importance of taking rapid guessing into account when the psychometric use of response times is of interest. (DIPF/Orig.)
DIPF-Departments:
Lehr und Lernqualität in Bildungseinrichtungen
Author(s):
Hahnel, Carolin; Kröhne, Ulf; Goldhammer, Frank
Title:
Rule-based process indicators of information processing explain performance differences in PIAAC web search tasks
In:
Large-scale Assessments in Education, 11 (2023), Art. 16
DOI:
10.1186/s40536-023-00169-5
URL:
https://largescaleassessmentsineducation.springeropen.com/articles/10.1186/s40536-023-00169-5
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Abstract:
Background: A priori assumptions about specific behavior in test items can be used to process log data in a rule-based fashion to identify the behavior of interest. In this study, we demonstrate such a top-down approach and created a process indicator to represent what type of information processing (flimsy, breadth-first, satisficing, sampling, laborious) adults exhibit when searching online for information. We examined how often the predefined patterns occurred for a particular task, how consistently they occurred within individuals, and whether they explained task success beyond individual background variables (age, educational attainment, gender) and information processing skills (reading and evaluation skills). Methods: We analyzed the result and log file data of ten countries that participated in the Programme for the International Assessment of Adult Competencies (PIAAC). The information processing behaviors were derived for two items that simulated a web search environment. Their explanatory value for task success was investigated with generalized linear mixed models. Results: The results showed item-specific differences in how frequently specific information processing patterns occurred, with a tendency of individuals not to settle on a single behavior across items. The patterns explained task success beyond reading and evaluation skills, with differences across items as to which patterns were most effective for solving a task correctly. The patterns even partially explained age-related differences. Conclusions: Rule-based process indicators have their strengths and weaknesses. Although dependent on the clarity and precision of a predefined rule, they allow for a targeted examination of behaviors of interest and can potentially support educational intervention during a test session. 
Concerning adults' digital competencies, our study suggests that the effective use of online information is not inherently based on demographic factors but mediated by central skills of lifelong learning and information processing strategies. (DIPF/Orig.)
DIPF-Departments:
Lehr und Lernqualität in Bildungseinrichtungen
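The top-down, rule-based approach described in the abstract above can be illustrated with a small sketch. The five pattern names are taken from the abstract; the input counts, thresholds, and decision rules below are invented placeholders, not the study's actual indicator definitions.

```python
# Illustrative only: map simple counts derived from a web-search log file to one
# of the five behavior labels named in the abstract. The rules are hypothetical
# stand-ins, not the study's actual definitions.

def classify_search_behavior(pages_visited, relevant_visited, total_relevant):
    """Return a coarse information-processing label for one test-taker/item."""
    if pages_visited <= 1:
        return "flimsy"        # barely any exploration before answering
    if relevant_visited == total_relevant:
        # saw every relevant page; extra visits suggest laborious processing
        return "laborious" if pages_visited > total_relevant else "breadth-first"
    if relevant_visited > 0:
        return "satisficing"   # stopped once something useful was found
    return "sampling"          # browsed around without hitting relevant pages

label = classify_search_behavior(pages_visited=5, relevant_visited=2, total_relevant=4)
```

Real indicators of this kind are computed per item from event-level log data; the abstract's point is that such rule-based labels explained task success beyond reading and evaluation skills.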
Author(s):
Harrison, Scott; Kröhne, Ulf; Goldhammer, Frank; Lüdtke, Oliver; Robitzsch, Alexander
Title:
Comparing the score interpretation across modes in PISA. An investigation of how item facets affect difficulty
In:
Large-scale Assessments in Education, 11 (2023), Art. 8
DOI:
10.1186/s40536-023-00157-9
URL:
https://largescaleassessmentsineducation.springeropen.com/articles/10.1186/s40536-023-00157-9
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Abstract (english):
Background: Mode effects, the variations in item and scale properties attributed to the mode of test administration (paper vs. computer), have stimulated research around test equivalence and trend estimation in PISA. The PISA assessment framework provides the backbone for the interpretation of PISA test scores. However, an identified gap in the current literature is whether mode effects have affected test score interpretation as defined by the assessment framework, and whether the interpretations of the PBA and CBA test scores are comparable. Methods: This study uses the 2015 PISA field trial data from thirteen countries to compare test modes through a construct representation approach. It is investigated whether item facets defined by the assessment framework (e.g., different cognitive demands) affect item difficulty comparably across modes, using a unidimensional two-group generalized partial credit model (GPCM). Results: Linking the assessment framework to item difficulty using linear regression showed that for both the maths and science domains, item categorisation relates to item difficulty; for the reading domain, however, no such conclusion was possible. In comparing PBA to CBA representations across the three domains, maths had one facet with a significant difference in representation, reading had all three facets significantly different, and for science, four out of six facets had significant differences. Modelling items labelled "mode invariant" in PISA 2015, the results indicated that in every domain, two facets showed significant differences between the test modes. The graphical inspection of difficulty patterns confirmed that reading shows stronger differences, while the patterns of the other domains were quite consistent between modes. Conclusions: The present study shows that the mode effects on difficulty vary within the task facets proposed by the PISA assessment framework, in particular for reading. These findings shed light on whether the comparability of score interpretation between modes is compromised. Given the limitations of the link between the reading domain and item difficulty, any conclusions in this domain are limited. Importantly, the present study adds a new approach and empirical findings to the investigation of cross-mode equivalence in PISA domains. (DIPF/Orig.)
DIPF-Departments:
Lehr und Lernqualität in Bildungseinrichtungen
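The facet-difficulty link described above can be sketched in miniature: with a single dummy-coded facet, the OLS fitted values reduce to per-category means of item difficulty. The facet names and numbers below are invented for illustration only.

```python
# Minimal sketch: regressing item difficulty on one categorical framework
# facet. With dummy coding, the fitted value for each category is simply the
# mean difficulty of the items in that category. All data are invented.

def facet_means(difficulties, facet_labels):
    """Per-category mean item difficulty (the one-way OLS fitted values)."""
    sums, counts = {}, {}
    for b, f in zip(difficulties, facet_labels):
        sums[f] = sums.get(f, 0.0) + b
        counts[f] = counts.get(f, 0) + 1
    return {f: sums[f] / counts[f] for f in sums}

means = facet_means([-0.5, 0.1, 0.3, 1.2, 0.8],
                    ["retrieve", "retrieve", "interpret", "reflect", "reflect"])
# e.g. means["reflect"] is 1.0: items in that facet category are hardest here
```

The study's comparison then asks whether such facet effects on difficulty look the same in the paper-based and computer-based data.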
Author(s):
Goldhammer, Frank; Hahnel, Carolin; Kröhne, Ulf; Frey, Andreas; Ludewig, Ulrich
Title:
Digitales Lesen und papierbasiertes Lesen im nationalen Vergleich
In:
McElvany, Nele; Lorenz, Ramona; Frey, Andreas; Goldhammer, Frank; Schilcher, Anita; Stubbe, Andreas C. (Eds.): IGLU 2021: Lesekompetenz von Grundschulkindern im internationalen Vergleich und im Trend über 20 Jahre, Münster: Waxmann, 2023, pp. 89-109
URL:
https://www.waxmann.com/index.php?eID=download&buchnr=4700
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
German
Keywords:
Federal state; Germany; Digital media; Primary school students; Hypertext; IGLU <Internationale Grundschul-Lese-Untersuchung>; Achievement measurement; Reading literacy; Reading comprehension; Print media; Student achievement; School year 04; Difficulty; Test item; Test construction; Text; Change; Comparison; Effect
Abstract:
Using various criteria, this chapter examines for Germany whether the tasks administered identically in digitalPIRLS and paperPIRLS measure reading comprehension comparably. For this purpose, at a subset of the PIRLS schools, in addition to the fourth-grade class working on digitalPIRLS test booklets, a further fourth-grade class completed the corresponding PIRLS tasks in printed test booklets. (DIPF/Orig.)
DIPF-Departments:
Lehr und Lernqualität in Bildungseinrichtungen
Author(s):
Schoor, Cornelia; Zink, Theresa; Mahlow, Nina; Hahnel, Carolin; Deribo, Tobias; Kröhne, Ulf; Goldhammer, Frank; Artelt, Cordula
Title:
Das Textverstehen von Studierenden beim Lesen multipler Dokumente und dessen Förderung
In:
Alker-Windbichler, Stefan; Kuhn, Axel; Lodes, Benedikt; Stocker, Günther (Eds.): Akademisches Lesen: Medien, Praktiken, Bibliotheken, Göttingen: V&R unipress, 2022 (Bibliothek im Kontext, 5), pp. 57-85
DOI:
10.14220/9783737013970.57
URL:
https://www.vr-elibrary.de/doi/10.14220/9783737013970.57
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
German
Abstract (english):
Academic reading often requires students to understand and relate multiple documents on a topic. However, many students have not mastered such academic reading to a sufficient degree, at least at the beginning of their studies. Classical reading skills can support them in coping with the demands at hand but are often not sufficient on their own. In this article, the ability to comprehend multiple documents as well as findings on its relationship to other variables are presented. Support options are outlined. In particular, an approach is discussed in which students are offered a self-assessment program with subsequent feedback and support material tailored to their needs. (DIPF/Orig.)
DIPF-Departments:
Lehr und Lernqualität in Bildungseinrichtungen
Author(s):
Bengs, Daniel; Kröhne, Ulf; Brefeld, Ulf
Title:
Simultaneous constrained adaptive item selection for group-based testing
In:
Journal of Educational Measurement, 58 (2021) 2, pp. 236-261
DOI:
10.1111/jedm.12285
URL:
https://onlinelibrary.wiley.com/doi/abs/10.1111/jedm.12285
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Adaptive testing; Task; Selection; Computer-assisted method; Empirical study; Group; Achievement measurement; Model; Simulation; Technology-based testing; Test
Abstract (english):
By tailoring test forms to the test-taker's proficiency, Computerized Adaptive Testing (CAT) enables substantial increases in testing efficiency over fixed-form testing. When used for formative assessment, the alignment of task difficulty with proficiency increases the chance that teachers can derive useful feedback from assessment data. The application of CAT to formative assessment in the classroom, however, is hindered by the large number of different items used for the whole class; the required familiarization with a large number of test items puts a significant burden on teachers. An improved CAT procedure for group-based testing is presented, which uses simultaneous automated test assembly to impose a limit on the number of items used per group. The proposed linear model for simultaneous adaptive item selection allows for full adaptivity and the accommodation of constraints on test content. The effectiveness of the group-based CAT is demonstrated with real-world items in a simulated adaptive test of 3,000 groups of test-takers, under different assumptions on group composition. Results show that the group-based CAT maintained the efficiency of CAT, while a reduction in the number of used items by one half to two-thirds was achieved, depending on the within-group variance of proficiencies.
DIPF-Departments:
Bildungsqualität und Evaluation
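For contrast with the simultaneous group-based procedure above, ordinary CAT item selection can be sketched as follows: under the Rasch model, pick the unused item with maximum Fisher information at the current ability estimate. This is a generic textbook sketch with invented difficulties, not the paper's linear-model-based simultaneous assembly.

```python
# Generic CAT sketch (Rasch model): select the unused item whose Fisher
# information is largest at the current ability estimate theta. The item
# difficulties below are invented example values.
import math

def rasch_information(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def select_item(theta, difficulties, used):
    """Index of the most informative not-yet-administered item."""
    candidates = [i for i in range(len(difficulties)) if i not in used]
    return max(candidates, key=lambda i: rasch_information(theta, difficulties[i]))

# At theta = 0 the item whose difficulty is closest to 0 is most informative.
idx = select_item(0.0, [-2.0, -0.5, 0.1, 1.5], used=set())
```

The group-based procedure in the paper extends this per-person rule with a shared constraint that caps the number of distinct items administered across a whole group, which is what makes the assembly problem substantially harder.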
Author(s):
Deribo, Tobias; Kröhne, Ulf; Goldhammer, Frank
Title:
Model-based treatment of rapid guessing
In:
Journal of Educational Measurement, 58 (2021) 2, pp. 281-303
DOI:
10.1111/jedm.12290
URL:
https://onlinelibrary.wiley.com/doi/10.1111/jedm.12290?af=R
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Achievement test; Test construction; Measurement method; Computer-assisted method; Question; Response; Behavior; Duration; Problem solving; Model; Student; Media literacy; Item-Response-Theory; Multiple-choice method; Validity; Panel; Longitudinal study
Abstract (english):
The increased availability of time-related information as a result of computer-based assessment has enabled new ways to measure test-taking engagement. One of these ways is to distinguish between solution and rapid guessing behavior. Prior research has recommended response-level filtering to deal with rapid guessing. Response-level filtering can lead to parameter bias if rapid guessing depends on the measured trait or (un-)observed covariates. Therefore, a model based on Mislevy and Wu (1996) was applied to investigate the assumption of ignorable missing data underlying response-level filtering. The model allowed us to investigate different approaches to treating response-level filtered responses in a single framework through model parameterization. The study found that lower-ability test-takers tend to rapidly guess more frequently and are more likely to be unable to solve an item they guessed on, indicating a violation of the assumption of ignorable missing data underlying response-level filtering. Furthermore, ability estimation seemed sensitive to different approaches to treating response-level filtered responses. Moreover, model-based approaches exhibited better model fit and higher convergent validity evidence compared to more naïve treatments of rapid guessing. The results illustrate the need to thoroughly investigate the assumptions underlying specific treatments of rapid guessing as well as the need for robust methods. (DIPF/Orig.)
DIPF-Departments:
Lehr und Lernqualität in Bildungseinrichtungen
Author(s):
Kröhne, Ulf; Deribo, Tobias; Goldhammer, Frank
Title:
Rapid guessing rates across administration mode and test setting
In:
Psychological Test and Assessment Modeling, 62 (2020) 2, pp. 144-177
DOI:
10.25656/01:23630
URN:
urn:nbn:de:0111-pedocs-236307
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-236307
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Test; Assessment; Innovation; Validity; Technology-based testing; Design; Test construction; Testing procedure; Effect; Behavior; Log file; Experiment; Student; Comparative study
Abstract (english):
Rapid guessing can threaten measurement invariance and the validity of large-scale assessments, which are often conducted under low-stakes conditions. Comparing measures collected under different administration modes or in different test settings necessitates that rapid guessing rates also be comparable. Response time thresholds can be used to identify rapid guessing behavior. Using data from an experiment embedded in an assessment of university students as part of the National Educational Panel Study (NEPS), we show that rapid guessing rates can differ across modes. Specifically, rapid guessing rates are found to be higher for un-proctored individual online assessment. It is also shown that rapid guessing rates differ across different groups of students and are related to properties of the test design. No relationship between dropout behavior and rapid guessing rates was found. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
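The threshold idea from the abstract above can be sketched minimally: flag a response as a rapid guess when its response time falls below a threshold, then aggregate the flags into a rate. The fixed 3-second cutoff and the example times are invented for illustration; in practice thresholds are typically set per item.

```python
# Hedged sketch: rapid-guessing rate via a response-time threshold. The
# 3-second cutoff is an invented example value, not the study's threshold.

def rapid_guessing_rate(response_times, threshold=3.0):
    """Proportion of responses faster than `threshold` seconds."""
    flags = [t < threshold for t in response_times]
    return sum(flags) / len(flags)

# Two of these eight responses fall under the 3-second cutoff.
times = [12.4, 1.2, 8.9, 15.0, 2.7, 30.1, 9.8, 11.3]
rate = rapid_guessing_rate(times)  # 0.25
```

Comparing such rates across administration modes or test settings, as the study does, presupposes that the flagging rule itself is applied identically in every condition.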
Author(s):
Robitzsch, Alexander; Lüdtke, Oliver; Goldhammer, Frank; Kröhne, Ulf; Köller, Olaf
Title:
Reanalysis of the German PISA data. A comparison of different approaches for trend estimation with a particular emphasis on mode effects
In:
Frontiers in Psychology, 11 (2020), Art. 884
DOI:
10.3389/fpsyg.2020.00884
URN:
urn:nbn:de:0111-pedocs-232269
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-232269
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
PISA <Programme for International Student Assessment>; Test; Procedure; Scaling; Method; Technology-based testing; Change; Development; Impact research; Germany
Abstract:
International large-scale assessments, such as the Program for International Student Assessment (PISA), are conducted to provide information on the effectiveness of education systems. In PISA, the target population of 15-year-old students is assessed every 3 years. Trends show whether competencies have changed in the countries between PISA cycles. In order to provide valid trend estimates, it is desirable to retain the same test conditions and statistical methods in all PISA cycles. In PISA 2015, however, the test mode changed from paper-based to computer-based tests, and the scaling method was changed. In this paper, we investigate the effects of these changes on trend estimation in PISA using German data from all PISA cycles (2000-2015). Our findings suggest that the change from paper-based to computer-based tests could have a severe impact on trend estimation but that the change of the scaling model did not substantially change the trend estimates.
DIPF-Departments:
Bildungsqualität und Evaluation
Author(s):
Goldhammer, Frank; Kröhne, Ulf
Title:
Computerbasiertes Assessment
In:
Moosbrugger, Helfried; Kelava, Augustin (Eds.): Testtheorie und Fragebogenkonstruktion, Berlin: Springer, 2020, pp. 119-141
DOI:
10.1007/978-3-662-61532-4_6
URL:
https://link.springer.com/chapter/10.1007/978-3-662-61532-4_6
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
German
Keywords:
Data analysis; Technology-based testing; Computer-assisted method; Testing procedure; Psychological research; Social sciences; Definition; Example; Conception; Response; Data collection; Interaction; Scoring; Test construction; Evidence; Software; Questionnaire; Measurement method
Abstract:
The chapter provides an overview of how tests and questionnaires can be implemented with the help of computers in the broader sense, thereby extending or clearly surpassing the possibilities of classical paper-and-pencil methods. This concerns, for example, the development of computer-based items with innovative response formats and multimedia stimuli, as well as the automatic scoring of the observed response behavior. Furthermore, the computer enables more flexible test assembly, i.e., items can be sequenced automatically while taking content-related and statistical criteria into account. The chapter also addresses how log file data can increase the analytical potential and how automatic, timely feedback of test data can support learning, for example. The chapter closes with pointers to relevant, freely available software solutions for assessment purposes. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation