Search results in the DIPF database of publications
Your query: (Keywords: "Technologiebasiertes Testen")
70 items matching your search terms.
The role of reading skills in the evaluation of online information gathered from search engine environments
Hahnel, Carolin; Goldhammer, Frank; Kröhne, Ulf; Naumann, Johannes
Journal Article
| In: Computers in Human Behavior | 2018
Record ID: 37929
Author(s):
Hahnel, Carolin; Goldhammer, Frank; Kröhne, Ulf; Naumann, Johannes
Title:
The role of reading skills in the evaluation of online information gathered from search engine environments
In:
Computers in Human Behavior, 78 (2018), pp. 223-234
DOI:
10.1016/j.chb.2017.10.004
URN:
urn:nbn:de:0111-dipfdocs-192242
URL:
http://www.dipfdocs.de/volltexte/2020/19224/pdf/CiHB_2018_Hahnel_et_al_The_role_of_reading_skills_in_the_evaluation_of_online_information_A.pdf
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Bewertung; Deutschland; Digitale Medien; Einflussfaktor; Empirische Untersuchung; Hypertext; Information; Internet; Lesekompetenz; Leseverhalten; Online; Relevanz; Schüler; Sekundarbereich; Suchmaschine; Technologiebasiertes Testen
Abstract (english):
A critical evaluation of results to find useful information is essential when doing a web search. In this study, we investigated the evaluation skills of secondary school students, based on their behavior in selecting links from a search engine result page (SERP). To clarify the role of reading when evaluating online information, we assessed students' individual reading skills on word, sentence, and text level. Data from 416 15-year-old students participating in a computer based German add-on study to the Programme for International Student Assessment (PISA) in 2012 were investigated. Using generalized linear mixed models (GLMMs), it was found that reading skills affected the ability to evaluate online information. These effects were influenced by the distinctiveness of information in relevance and students' navigation to subsequent SERPs or websites. The results are interpreted to show that skilled readers are able to allocate their cognitive resources more efficiently than less skilled readers when evaluating online information. Implications are discussed in terms of underlying cognitive processes when making web search decisions. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
How to conceptualize, represent, and analyze log data from technology-based assessments? A generic framework and an application to questionnaire items
Kroehne, Ulf; Goldhammer, Frank
Journal Article
| In: Behaviormetrika | 2018
Record ID: 38895
Author(s):
Kroehne, Ulf; Goldhammer, Frank
Title:
How to conceptualize, represent, and analyze log data from technology-based assessments? A generic framework and an application to questionnaire items
In:
Behaviormetrika, 45 (2018) 2, pp. 527-563
DOI:
10.1007/s41237-018-0063-y
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Bildungsforschung; Empirische Forschung; Logdatei; Datenanalyse; Technologiebasiertes Testen; PISA <Programme for International Student Assessment>; Fragebogen; Konzeption; Testkonstruktion; Daten; Typologie; Hardware; Antwort; Verhalten; Dauer; Interaktion; Mensch-Maschine-Kommunikation; Indikator
Abstract:
Log data from educational assessments attract more and more attention and large-scale assessment programs have started providing log data as scientific use files. Such data generated as a by-product of computer-assisted data collection has been known as paradata in survey research. In this paper, we integrate log data from educational assessments into a taxonomy of paradata. To provide a generic framework for the analysis of log data, finite state machines are suggested. Beyond its computational value, the specific benefit of using finite state machines is achieved by separating platform-specific log events from the definition of indicators by states. Specifically, states represent filtered log data given a theoretical process model, and therefore, encode the information of log files selectively. The approach is empirically illustrated using log data of the context questionnaires of the Programme for International Student Assessment (PISA). We extracted item-level response time components from questionnaire items that were administered as item batteries with multiple questions on one screen and related them to the item responses. Finally, the taxonomy and the finite state machine approach are discussed with respect to the definition of complete log data, the verification of log data and the reproducibility of log data analyses. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Competence assessment in education. Research, models and instruments
Leutner, Detlev; Fleischer, Jens; Grünkorn, Juliane; Klieme, Eckhard (Eds.)
Compilation Book
| Cham: Springer | 2017
Record ID: 36882
Editor(s):
Leutner, Detlev; Fleischer, Jens; Grünkorn, Juliane; Klieme, Eckhard
Title:
Competence assessment in education. Research, models and instruments
Published:
Cham: Springer, 2017 (Methodology of educational measurement and assessment)
DOI:
10.1007/978-3-319-50030-0
Publication Type:
2. Editorship; edited volume (no special category)
Language:
English
Keywords:
Bildungsforschung; Empirische Forschung; Deutschland; Österreich; Schweiz; Luxemburg; Diagnostik; Lernmittel; Bild; Text; Erwachsenenbildung; Berufsausbildung; Physik; Entscheidung; Nachhaltige Entwicklung; Metakognition; Sekundarstufe I; Problemlösen; Schülerleistung; Technologiebasiertes Testen; Psychometrie; Adaptives Testen; Feedback; Kompetenz; Bewertung; Schüler; Kompetenzerwerb; Modellierung; Primarbereich; Geografie; Literatur; Naturwissenschaftliche Kompetenz; Selbstgesteuertes Lernen; Lehrer; Berufliche Kompetenz; Lehrerausbildung; Pädagogik; Professionalität; Fachwissen; Schulwahl; Empfehlung
Abstract (english):
This book addresses challenges in the theoretically and empirically adequate assessment of competencies in educational settings. It presents the scientific projects of the priority program "Competence Models for Assessing Individual Learning Outcomes and Evaluating Educational Processes," which focused on competence assessment across disciplines in Germany. The six-year program coordinated 30 research projects involving experts from the fields of psychology, educational science, and subject-specific didactics. The main reference point for all projects is the concept of "competencies," which are defined as "context-specific cognitive dispositions that are acquired and needed to successfully cope with certain situations or tasks in specific domains" (Koeppen et al., 2008, p. 62). The projects investigate different aspects of competence assessment: The primary focus lies on the development of cognitive models of competencies, complemented by the construction of psychometric models based on these theoretical models. In turn, the psychometric models constitute the basis for the construction of instruments for effectively measuring competencies. The assessment of competencies plays a key role in optimizing educational processes and improving the effectiveness of educational systems. This book contributes to this challenging endeavor by meeting the need for more integrative, interdisciplinary research on the structure, levels, and development of competencies. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Conditioning factors of test-taking engagement in PIAAC. An exploratory IRT modelling approach considering person and item characteristics
Goldhammer, Frank; Martens, Thomas; Lüdtke, Oliver
Journal Article
| In: Large-scale Assessments in Education | 2017
Record ID: 37971
Author(s):
Goldhammer, Frank; Martens, Thomas; Lüdtke, Oliver
Title:
Conditioning factors of test-taking engagement in PIAAC. An exploratory IRT modelling approach considering person and item characteristics
In:
Large-scale Assessments in Education, 5 (2017), Art. 18
DOI:
10.1186/s40536-017-0051-9
URL:
https://largescaleassessmentsineducation.springeropen.com/articles/10.1186/s40536-017-0051-9
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Antwort; Einflussfaktor; Erwachsener; Item-Response-Theory; Kanada; Längsschnittuntersuchung; Leistungstest; Lesekompetenz; Mathematische Kompetenz; Messung; Motivation; PIAAC (Programme for the International Assessment of Adult Competencies); Problemlösen; Selbstkonzept; Technologiebasiertes Testen; Verhalten
Abstract:
Background: A potential problem of low-stakes large-scale assessments such as the Programme for the International Assessment of Adult Competencies (PIAAC) is low test-taking engagement. The present study pursued two goals in order to better understand conditioning factors of test-taking disengagement: First, a model-based approach was used to investigate whether item indicators of disengagement constitute a continuous latent person variable by domain. Second, the effects of person and item characteristics were jointly tested using explanatory item response models. Methods: Analyses were based on the Canadian sample of Round 1 of the PIAAC, with N = 26,683 participants completing test items in the domains of literacy, numeracy, and problem solving. Binary item disengagement indicators were created by means of item response time thresholds. Results: The results showed that disengagement indicators define a latent dimension by domain. Disengagement increased with lower educational attainment, lower cognitive skills, and when the test language was not the participant's native language. Gender did not exert any effect on disengagement, while age had a positive effect for problem solving only. An item's location in the second of two assessment modules was positively related to disengagement, as was item difficulty. The latter effect was negatively moderated by cognitive skill, suggesting that poor test-takers are especially likely to disengage with more difficult items. Conclusions: The negative effect of cognitive skill, the positive effect of item difficulty, and their negative interaction effect support the assumption that disengagement is the outcome of individual expectations about success (informed disengagement). (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
What to make of and how to interpret process data
Goldhammer, Frank; Zehner, Fabian
Journal Article
| In: Measurement: Interdisciplinary Research and Perspectives | 2017
Record ID: 38066
Author(s):
Goldhammer, Frank; Zehner, Fabian
Title:
What to make of and how to interpret process data
In:
Measurement: Interdisciplinary Research and Perspectives, 15 (2017) 3/4, pp. 128-132
DOI:
10.1080/15366367.2017.1411651
URN:
urn:nbn:de:0111-dipfdocs-192082
URL:
http://www.dipfdocs.de/volltexte/2020/19208/pdf/Measurement_2017_3-4_Goldhammer_Zehner_What_to_make_of_and_how_to_interpret_process_data_A.pdf
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Technologiebasiertes Testen; Kognitive Kompetenz; Testdurchführung; Kognitive Prozesse; Datenanalyse; Indikator; Interpretation; Validität
Abstract (english):
Maddox (2017) argues that respondents' talk and gesture during an assessment inform researchers how a response product has evolved. Indeed, how a task is performed represents key information for psychological and educational assessment. [...] Recently, process data has increasingly gained attention in cognitive ability testing given the digitalization of measurement and the possibility of exploiting log file data. [...] As shown by Maddox for large-scale assessments, even talk and gesture can be regarded as useful process data. In this case, the process data is not only video-recorded but also observed by the interviewer in situ; the interviewer interactively uses it to influence the test-taking process and to reduce construct-irrelevant variance. Thus, like product data (e.g., scores), process data is used to draw inferences. We argue in the following that the interpretation and use of process data and derived indicators require validation, just as product data do (Kane, 2013). This theoretical background, including some examples about log file data, sets the ground for our comments on Maddox's use of "talk and gesture as process data." (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Processing of positive-causal and negative-causal coherence relations in primary school children and adults. A test of the cumulative cognitive complexity approach in German
Knoepke, Julia; Richter, Tobias; Isberner, May-Britt; Naumann, Johannes; Neeb, Yvonne; Weinert, Sabine
Journal Article
| In: Journal of Child Language | 2017
Record ID: 36656
Author(s):
Knoepke, Julia; Richter, Tobias; Isberner, May-Britt; Naumann, Johannes; Neeb, Yvonne; Weinert, Sabine
Title:
Processing of positive-causal and negative-causal coherence relations in primary school children and adults. A test of the cumulative cognitive complexity approach in German
In:
Journal of Child Language, 44 (2017) 2, pp. 297-328
DOI:
10.1017/S0305000915000872
URN:
urn:nbn:de:0111-dipfdocs-191627
URL:
http://www.dipfdocs.de/volltexte/2020/19162/pdf/J.Child_Lang_2017_2_Knoepke_et_al_Processing_of_positivecausal_and_negativecausal_coherence_relations_A.pdf
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Deutsch; Deutschland; Empirische Untersuchung; Erwachsener; Grundschule; Grundschüler; Hören; Kognition; Kognitive Prozesse; Lesen; Leseverstehen; Semantik; Technologiebasiertes Testen; Test; Textanalyse; Textinterpretation; Textverständnis
Abstract:
Establishing local coherence relations is central to text comprehension. Positive-causal coherence relations link a cause and its consequence, whereas negative-causal coherence relations add a contrastive meaning (negation) to the causal link. According to the cumulative cognitive complexity approach, negative-causal coherence relations are cognitively more complex than positive-causal ones. Therefore, they require greater cognitive effort during text comprehension and are acquired later in language development. The present cross-sectional study tested these predictions for German primary school children from Grades 1 to 4 and adults in reading and listening comprehension. Accuracy data in a semantic verification task support the predictions of the cumulative cognitive complexity approach. Negative-causal coherence relations are cognitively more demanding than positive-causal ones. Moreover, our findings indicate that children's comprehension of negative-causal coherence relations continues to develop throughout the course of primary school. Findings are discussed with respect to the generalizability of the cumulative cognitive complexity approach to German. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Time-on-task effects in digital reading are non-linear and moderated by persons' skills and tasks' demands
Naumann, Johannes; Goldhammer, Frank
Journal Article
| In: Learning and Individual Differences | 2017
Record ID: 36715
Author(s):
Naumann, Johannes; Goldhammer, Frank
Title:
Time-on-task effects in digital reading are non-linear and moderated by persons' skills and tasks' demands
In:
Learning and Individual Differences, 53 (2017), pp. 1-16
DOI:
10.1016/j.lindif.2016.10.002
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Digitale Medien; Hypertext; Internationaler Vergleich; Kognitive Prozesse; Leistungsmessung; Lesekompetenz; Lesen; Leseverstehen; Modell; OECD-Länder; PISA <Programme for International Student Assessment>; Problemlösen; Schülerleistung; Technologiebasiertes Testen; Testaufgabe; Testkonstruktion; Wirkung; Zeit
Abstract:
Time-on-task effects on response accuracy in digital reading tasks were examined using PISA 2009 data (N = 34,062, 19 countries/economies). As a baseline, task responses were explained by time on task, tasks' easiness, and persons' digital reading skill (Model 1). Model 2 added a quadratic time-on-task effect, persons' comprehension skill and tasks' navigation demands as predictors. In each country, linear and quadratic time-on-task effects were moderated by person and task characteristics. Strongly positive linear time-on-task effects were found for persons being poor digital readers (Model 1) and poor comprehenders (Model 2), which decreased with increasing skill. Positive linear time-on-task effects were found for hard tasks (Model 1) and tasks high in navigation demands (Model 2). For easy tasks and tasks low in navigation demands, the time-on-task effects were negative, or close to zero, respectively. A negative quadratic component of the time-on-task effect was more pronounced for strong comprehenders, while the linear component was weaker. Correspondingly, for tasks high in navigation demands the negative quadratic component to the time-on-task effect was weaker, and the linear component was stronger. These results are in line with a dual-processing account of digital reading that distinguishes automatic reading components from resource-demanding regulation and navigation processes. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Herausforderungen bei der Schätzung von Trends in Schulleistungsstudien. Eine Skalierung der deutschen PISA-Daten [Challenges in estimating trends in school achievement studies. A scaling of the German PISA data]
Robitzsch, Alexander; Lüdtke, Oliver; Köller, Olaf; Kröhne, Ulf; Goldhammer, Frank; Heine, Jörg-Henrik
Journal Article
| In: Diagnostica | 2017
Record ID: 36898
Author(s):
Robitzsch, Alexander; Lüdtke, Oliver; Köller, Olaf; Kröhne, Ulf; Goldhammer, Frank; Heine, Jörg-Henrik
Title:
Herausforderungen bei der Schätzung von Trends in Schulleistungsstudien. Eine Skalierung der deutschen PISA-Daten
In:
Diagnostica, 63 (2017) 2, pp. 148-165
DOI:
10.1026/0012-1924/a000177
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
German
Keywords:
Deutschland; Einflussfaktor; Leistungsmessung; Lesekompetenz; Mathematische Kompetenz; Modell; Naturwissenschaftliche Kompetenz; PISA <Programme for International Student Assessment>; Schülerleistung; Schülerleistungstest; Skalierung; Technologiebasiertes Testen; Testauswertung
Abstract (german):
Internationale Schulleistungsstudien wie das Programme for International Student Assessment (PISA) dienen den teilnehmenden Ländern zur Feststellung der Leistungsfähigkeit ihrer Schulsysteme. In PISA wird die Zielpopulation (15-jährige Schülerinnen und Schüler) alle 3 Jahre getestet. Von besonderer Bedeutung sind dabei die Trendinformationen, die für die Zielpopulation ausweisen, ob sich ihre Leistungen gegenüber denen aus früheren Erhebungen verändert haben. Um solche Trends valide interpretieren zu können, sollten die PISA-Erhebungen unter möglichst vergleichbaren Bedingungen durchgeführt und die verwendeten statistischen Verfahren vergleichbar bleiben. In PISA 2015 wurde erstmalig computerbasiert getestet; zuvor mittels Papier-und-Bleistift-Tests. Es wurde das Skalierungsmodell verändert und in den Naturwissenschaften wurden neue Aufgabenformate eingesetzt. Im vorliegenden Beitrag gehen wir anhand der nationalen PISA-Stichproben von 2000 bis 2015 der Frage nach, inwiefern der Wechsel des Testmodus und der Wechsel des Skalierungsmodells die Interpretation der Trendschätzungen beeinflussen. Die Analysen belegen, dass die Veränderung von Papier-und-Bleistift-Tests auf Computertestung die Trendschätzung für Deutschland verzerrt haben könnte. (DIPF/Orig.)
Abstract (english):
International large-scale assessments, for instance, the Programme for International Student Assessment (PISA), are conducted to provide information on the effectiveness of educational systems. In PISA, the target population of 15-year-old students is assessed every 3 years. Trends show whether competencies have changed for the target population between PISA cycles. To ensure valid trend information, it is necessary to keep the test conditions and statistical methods in all PISA cycles as constant as possible. In PISA 2015, however, several changes were established; the test model changed from paper pencil to computer tests, scaling methods were changed, and new types of tasks were used in science. In this article, we investigate the effects of these changes on trend estimation in PISA using German data from all PISA cycles (2000 - 2015). Findings suggest that the change from paper pencil to computer tests could have biased the trend estimation. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Relating product data to process data from computer-based competency assessment
Goldhammer, Frank; Naumann, Johannes; Rölke, Heiko; Stelter, Annette; Tóth, Krisztina
Book Chapter
| In: Leutner, Detlev; Fleischer, Jens; Grünkorn, Juliane; Klieme, Eckhard (Eds.): Competence assessment in education: Research, models and instruments | Cham: Springer | 2017
Record ID: 36902
Author(s):
Goldhammer, Frank; Naumann, Johannes; Rölke, Heiko; Stelter, Annette; Tóth, Krisztina
Title:
Relating product data to process data from computer-based competency assessment
In:
Leutner, Detlev; Fleischer, Jens; Grünkorn, Juliane; Klieme, Eckhard (Eds.): Competence assessment in education: Research, models and instruments, Cham: Springer, 2017 (Methodology of educational measurement and assessment), pp. 407-425
DOI:
10.1007/978-3-319-50030-0_24
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
English
Keywords:
Technologiebasiertes Testen; Aufgabe; Zeit; Dauer; Erfolg; Problemlösen; PIAAC <Programme for the International Assessment of Adult Competencies>; Datenanalyse
Abstract (english):
Competency measurement typically focuses on task outcomes. Taking process data into account (i.e., processing time and steps) can provide new insights into construct-related solution behavior, or confirm assumptions that govern task design. This chapter summarizes four studies to illustrate the potential of behavioral process data for explaining task success. It also shows that generic process measures such as time on task may have different relations to task success, depending on the features of the task and the test-taker. The first study addresses differential effects of time on task on success across tasks used in the OECD Programme for the International Assessment of Adult Competencies (PIAAC). The second study, also based on PIAAC data, investigates at a fine-grained level, how the time spent on automatable subtasks in problem-solving tasks relates to task success. The third study addresses how the number of steps taken during problem solving predicts success in PIAAC problem-solving tasks. In a fourth study, we explore whether successful test-takers can be clustered on the basis of various behavioral process indicators that reflect information problem solving. Finally, we address how to handle unstructured and large sets of process data, and briefly present a process data extraction tool. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation; Informationszentrum Bildung
Educational process mining. New possibilities for understanding students' problem-solving skills
Tóth, Krisztina; Rölke, Heiko; Goldhammer, Frank; Barkow, Ingo
Book Chapter
| In: Csapó, Benő; Funke, Joachim (Eds.): The nature of problem solving: Using research to inspire 21st century learning | Paris: OECD Publishing | 2017
Record ID: 37734
Author(s):
Tóth, Krisztina; Rölke, Heiko; Goldhammer, Frank; Barkow, Ingo
Title:
Educational process mining. New possibilities for understanding students' problem-solving skills
In:
Csapó, Benő; Funke, Joachim (Eds.): The nature of problem solving: Using research to inspire 21st century learning, Paris: OECD Publishing, 2017, pp. 193-209
DOI:
10.1787/9789264273955-14-en
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
English
Keywords:
Computer; Problemlösen; Schüler; Kompetenz; Schülerleistung; Leistungsmessung; Technologiebasiertes Testen; Datenanalyse; Data Mining; Logdatei; Informationssystem; Datenbank; Visualisierung; Interaktion; Verhalten
Abstract:
The assessment of problem-solving skills heavily relies on computer-based assessment (CBA). In CBA, all student interactions with the assessment system are automatically stored. Using the accumulated data, the individual test-taking processes can be reproduced at any time. Going one step further, recorded processes can even be used to extend the problem-solving assessment itself: the test-taking process-related data gives us the opportunity to 1) examine human-computer interactions via traces left in the log file; 2) map students' response processes to find distinguishable problem-solving strategies; and 3) discover relationships between students' activities and task performance. This chapter describes how to extract process-related information from event logs, how to use these data in problem-solving assessments and describes methods which help discover novel, useful information based on individual problem-solving behaviour. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation; Informationszentrum Bildung