Search results from the DIPF publication database
Your query:
(Persons: "Hahnel," and "Carolin")
41 results found
Authors:
Schoor, Cornelia; Rouet, Jean-François; Artelt, Cordula; Mahlow, Nina; Hahnel, Carolin; Kroehne, Ulf; Goldhammer, Frank
Title:
Readers' perceived task demands and their relation to multiple document comprehension strategies and outcome
In:
Learning and Individual Differences, 88 (2021), p. 102018
DOI:
10.1016/j.lindif.2021.102018
URL:
https://www.sciencedirect.com/science/article/abs/pii/S1041608021000558?via%3Dihub
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
DIPF department:
Lehr- und Lernqualität in Bildungseinrichtungen
Authors:
Zehner, Fabian; Eichmann, Beate; Deribo, Tobias; Harrison, Scott; Bengs, Daniel; Andersen, Nico; Hahnel, Carolin
Title:
Applying psychometric modeling to aid feature engineering in predictive log-data analytics. The NAEP EDM Competition
In:
Journal of Educational Data Mining, 13 (2021) 2, pp. 80-107
DOI:
10.5281/zenodo.5275316
URN:
urn:nbn:de:0111-dipfdocs-250034
URL:
https://nbn-resolving.org/urn:nbn:de:0111-dipfdocs-250034
Document type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Psychometrics; Modeling; Log; Data analysis; Test-taking behavior; Cluster
Abstract (English):
The NAEP EDM Competition required participants to predict efficient test-taking behavior based on log data. This paper describes our top-down approach for engineering features by means of psychometric modeling, aiming at machine learning for the predictive classification task. For feature engineering, we employed, among others, the Log-Normal Response Time Model for estimating latent person speed, and the Generalized Partial Credit Model for estimating latent person ability. Additionally, we adopted an n-gram feature approach for event sequences. Furthermore, instead of using the provided binary target label, we distinguished inefficient test takers who were going too fast and those who were going too slow for training a multi-label classifier. Our best-performing ensemble classifier comprised three sets of low-dimensional classifiers, dominated by test-taker speed. While our classifier reached moderate performance, relative to the competition leaderboard, our approach makes two important contributions. First, we show how classifiers that contain features engineered through literature-derived domain knowledge can provide meaningful predictions if results can be contextualized to test administrators who wish to intervene or take action. Second, our re-engineering of test scores enabled us to incorporate person ability into the models. However, ability was hardly predictive of efficient behavior, leading to the conclusion that the target label's validity needs to be questioned. Beyond competition-related findings, we furthermore report a state sequence analysis for demonstrating the viability of the employed tools. The latter yielded four different test-taking types that described distinctive differences between test takers, providing relevant implications for assessment practice. (DIPF/Orig.)
DIPF department:
Lehr- und Lernqualität in Bildungseinrichtungen
Authors:
Hahnel, Carolin; Eichmann, Beate; Goldhammer, Frank
Title:
Evaluation of online information in university students. Development and scaling of the screening instrument EVON
In:
Frontiers in Psychology, 11 (2020), Art. 562128
DOI:
10.3389/fpsyg.2020.562128
URN:
urn:nbn:de:0111-pedocs-232241
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-232241
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Germany; Internet; Information literacy; Resource; Credibility; Relevance; Assessment; Test; Test development; Item analysis; Search engine; Simulation; Technology-based testing; Interview; Survey instrument; Evaluation; Student; Rasch model; Empirical study
Abstract:
As Internet sources provide information of varying quality, it is an indispensable prerequisite skill to evaluate the relevance and credibility of online information. Based on the assumption that competent individuals can use different properties of information to assess its relevance and credibility, we developed the EVON (evaluation of online information), an interactive computer-based test for university students. The developed instrument consists of eight items that assess the skill to evaluate online information in six languages. Within a simulated search engine environment, students are requested to select the most relevant and credible link for a respective task. To evaluate the developed instrument, we conducted two studies: (1) a pre-study for quality assurance and observing the response process (cognitive interviews of n = 8 students) and (2) a main study aimed at investigating the psychometric properties of the EVON and its relation to other variables (n = 152 students). The results of the pre-study provided first evidence for a theoretically sound test construction with regard to students' item processing behavior. The results of the main study showed acceptable psychometric outcomes for a standardized screening instrument with a small number of items. The item design criteria affected the item difficulty as intended, and students' choice to visit a website had an impact on their task success. Furthermore, the probability of task success was positively predicted by general cognitive performance and reading skill. Although the results uncovered a few weaknesses (e.g., a lack of difficult items), and the efforts of validating the interpretation of EVON outcomes still need to be continued, the overall results speak in favor of a successful test construction and provide first indication that the EVON assesses students' skill in evaluating online information in search engine environments. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Authors:
Schoor, Cornelia; Hahnel, Carolin; Artelt, Cordula; Reimann, Daniel; Kroehne, Ulf; Goldhammer, Frank
Title:
Entwicklung und Skalierung eines Tests zur Erfassung des Verständnisses multipler Dokumente von Studierenden [Development and scaling of a test assessing university students' multiple document comprehension]
In:
Diagnostica, 66 (2020) 2, pp. 123-135
DOI:
10.1026/0012-1924/a000231
URN:
urn:nbn:de:0111-pedocs-218434
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-218434
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
German
Keywords:
Test construction; Student; Measurement; Text comprehension; Source; Content; Document; Diagnostic test; Competence; Data collection; Data analysis; Model; Scaling; Validity
Abstract (translated from German):
Multiple document comprehension (MDC) is understood as the ability to construct an integrated representation of a topical domain from different sources of information. As such, it is an important competence both for successfully completing university studies and for societal participation. To date, however, there is no established diagnostic instrument in this area. To close this gap, a test was developed that covers four central cognitive requirements of MDC and was examined using data from 310 students of the social sciences and humanities. The competence measured by the MDC test proved to be unidimensional. The MDC test score showed theory-consistent relationships with the final school exam grade, the stage of study, and performance on an essay task. Overall, the results provide empirical evidence that the MDC test score reflects students' cross-disciplinary ability to comprehend multiple documents. (DIPF/Orig.)
Abstract (English):
Multiple document comprehension (MDC) is defined as the ability to construct an integrated representation based on different sources of information on a particular topic. It is an important competence for both the successful accomplishment of university studies and participation in societal discussions. Yet, there is no established assessment instrument for MDC. Therefore, we developed a test covering four theory-based cognitive requirements of MDC. Based on the data of 310 university students of social sciences and humanities, the MDC test proved to be a unidimensional measure. Furthermore, the test score was related to the final school exam grade, the study level (bachelor / master), and the performance in an essay task. The empirical results suggest that the score of the MDC test can be interpreted as the generic competence of university students to understand multiple documents. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Authors:
Zehner, Fabian; Kroehne, Ulf; Hahnel, Carolin; Goldhammer, Frank
Title:
PISA reading. Mode effects unveiled in short text responses
In:
Psychological Test and Assessment Modeling, 62 (2020) 1, pp. 85-105
URN:
urn:nbn:de:0111-pedocs-203542
URL:
https://www.psychologie-aktuell.com/fileadmin/Redaktion/Journale/ptam-2020-1/05_Zehner.pdf
Document type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
PISA <Programme for International Student Assessment>; Germany; Student achievement; Achievement test; Computer-based method; Paper; Pencil; Response; Text; Content; Information; Quantity; Mode change; Effect; Impact research; Data analysis; Secondary analysis
Abstract (English):
Educational large-scale assessments risk their temporal comparability when shifting from paper- to computer-based assessment. A recent study showed how text responses have altered alongside PISA's mode change, indicating mode effects. Uncertainty remained, however, because it compared students from 2012 and 2015. We aimed at reproducing the findings in an experimental setting, in which n = 836 students answered PISA reading questions on computer, paper, or both. Text response features for information quantity and relevance were extracted automatically. Results show a comprehensive recovery of findings. Students incorporated more information into their text responses on computer than on paper, with some items being more affected than others. Regarding information relevance, we found less mode effect variance across items than the original study. Hints for a relationship between mode effect and gender across items could be reproduced. The study demonstrates the stability of linguistic feature extraction from text responses. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Authors:
Weis, Mirjam; Doroganova, Anastasia; Hahnel, Carolin; Becker-Mrotzek, Michael; Lindauer, Thomas; Artelt, Cordula; Reiss, Kristina
Title:
Aktueller Stand der Lesekompetenz in PISA 2018 [Current state of reading competence in PISA 2018]
In:
Schulmanagement-Handbuch, (2020) 173, pp. 9-19
Document type:
3b. Articles in other journals; practice-oriented
Language:
German
Keywords:
Reading competence; PISA <Programme for International Student Assessment>; Student achievement; Achievement measurement; Task; Example; Reading; Adolescent; Achievement improvement; Reading promotion; Effect; School type; Gender difference; International comparison; OECD countries; Germany
Abstract (translated from German):
What lessons can be drawn from the reading results of the PISA 2018 assessment? It is encouraging that the reading competence of fifteen-year-olds in Germany is above the OECD average. Moreover, the share of particularly strong readers among adolescents in Germany increased significantly in PISA 2018 compared to 2009. In international comparison, Germany thus has a relatively large share of fifteen-year-olds who are highly competent readers and bring optimal prerequisites for independent learning through reading. It is important not to slacken in supporting this top group, so that its share can be maintained and expanded. Less encouraging is the large share of particularly weak readers in Germany. More than a fifth of fifteen-year-olds perform below proficiency level II and are thus barely able to engage with texts in a comprehending way. At non-Gymnasium school types in particular, the share of weak readers has risen substantially compared to 2009. This is a worrying finding, and its consequence should be even stronger support for weak readers among children and adolescents in Germany. Research has shown that continuous reading promotion from preschool through the end of schooling is particularly effective. Overall, the PISA 2018 reading results for Germany suggest that the support of particularly weak readers must be addressed in order to substantially reduce the at-risk group of adolescents at the lowest proficiency levels. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Authors:
Zehner, Fabian; Harrison, Scott; Eichmann, Beate; Deribo, Tobias; Bengs, Daniel; Andersen, Nico; Hahnel, Carolin
Title:
The NAEP EDM competition. On the value of theory-driven psychometrics and machine learning for predictions based on log data
In:
Rafferty, Anna N.; Whitehill, Jacob; Romero, Cristobal; Cavalli-Sforza, Violetta (Eds.): Proceedings of the 13th International Conference on Educational Data Mining (EDM 2020), Worcester, MA: International Educational Data Mining Society, 2020, pp. 302-312
URL:
https://educationaldatamining.org/files/conferences/EDM2020/papers/paper_118.pdf
Document type:
4. Contributions to edited volumes; conference proceedings
Language:
English
Keywords:
Log file; Psychometrics; Theory; Technological development; Competition; Prediction; Test; Success
Abstract:
The 2nd Annual WPI-UMASS-UPENN EDM Data Mining Challenge required contestants to predict efficient test-taking based on log data. In this paper, we describe our theory-driven and psychometric modeling approach. For feature engineering, we employed the Log-Normal Response Time Model for estimating latent person speed, and the Generalized Partial Credit Model for estimating latent person ability. Additionally, we adopted an n-gram feature approach for event sequences. For training a multi-label classifier, we distinguished inefficient test takers who were going too fast and those who were going too slow, instead of using the provided binary target label. Our best-performing ensemble classifier comprised three sets of low-dimensional classifiers, dominated by test-taker speed. While our classifier reached moderate performance, relative to the competition leaderboard, our approach makes two important contributions. First, we show how explainable classifiers could provide meaningful predictions if results can be contextualized to test administrators who wish to intervene or take action. Second, our re-engineering of test scores enabled us to incorporate person ability into the estimation. However, ability was hardly predictive of efficient behavior, leading to the conclusion that the target label's validity needs to be questioned. The paper concludes with tools that are helpful for substantively meaningful log data mining. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Authors:
Goldhammer, Frank; Hahnel, Carolin; Kroehne, Ulf
Title:
Analysing log file data from PIAAC
In:
Maehler, Débora B.; Rammstedt, Beatrice (Eds.): Large-scale cognitive assessment: Analyzing PIAAC data, Cham: Springer, 2020 (Methodology of Educational Measurement and Assessment), pp. 239-269
DOI:
10.1007/978-3-030-47515-4_10
URL:
https://link.springer.com/chapter/10.1007/978-3-030-47515-4_10
Document type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
English
Keywords:
PIAAC (Programme for the International Assessment of Adult Competencies); Technology-based testing; Computer-based method; Log file; Data analysis; Software; Tools; Use; Research; Access; Documentation
Abstract:
The OECD Programme for the International Assessment of Adult Competencies (PIAAC) was the first computer-based large-scale assessment to provide anonymised log file data from the cognitive assessment together with extensive online documentation and a data analysis support tool. The goal of the chapter is to familiarise researchers with how to access, understand, and analyse PIAAC log file data for their research purposes. After providing some conceptual background on the multiple uses of log file data and how to infer states of information processing from log file data, previous research using PIAAC log file data is reviewed. Then, the accessibility, structure, and documentation of the PIAAC log file data are described in detail, as well as how to use the PIAAC LogDataAnalyzer to extract predefined process indicators and how to create new process indicators based on the raw log data export.
DIPF department:
Bildungsqualität und Evaluation
Authors:
Schoor, Cornelia; Hahnel, Carolin; Mahlow, Nina; Klagges, Jorge; Kroehne, Ulf; Goldhammer, Frank; Artelt, Cordula
Title:
Multiple document comprehension of university students. Test development and relations to person and process characteristics
In:
Zlatkin-Troitschanskaia, Olga; Pant, Hans Anand; Toepper, Miriam; Lautenbach, Corinna (Eds.): Student learning in German higher education: Innovative measurement approaches and research results, Wiesbaden: Springer, 2020, pp. 221-240
DOI:
10.1007/978-3-658-27886-1_11
URL:
https://link.springer.com/chapter/10.1007/978-3-658-27886-1_11
Document type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
English
Abstract:
Multiple document comprehension is the ability to construct an integrated representation of a specific topic based on several sources. It is an important competence for university students; however, there has been so far no established instrument to assess multiple document comprehension in a standardized way. Therefore, we developed a test covering four theory-based cognitive requirements: The corroboration of information across texts, the integration of information across texts, the comparison of sources and source evaluations across texts, and the comparison of source-content links across texts. The developed test comprised 174 items and was empirically examined in a study with 310 university students. Several items had to be excluded due to psychometric misfit and differential item functioning. The resulting final test contains 67 items within 5 units (i.e., test structures of 2-3 texts and related items) and has been shown to fit a unidimensional IRT Rasch model. The test score showed expected relationships to the final school exam grade, the study level (Bachelor/Master), essay performance, sourcing behavior, as well as mental load and mental effort. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Authors:
Hahnel, Carolin; Kroehne, Ulf; Goldhammer, Frank; Schoor, Cornelia; Mahlow, Nina; Artelt, Cordula
Title:
Validating process variables of sourcing in an assessment of multiple document comprehension
In:
British Journal of Educational Psychology, 89 (2019) 3, pp. 524-537
DOI:
10.1111/bjep.12278
URN:
urn:nbn:de:0111-dipfdocs-191514
URL:
https://onlinelibrary.wiley.com/doi/full/10.1111/bjep.12278
Document type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Achievement measurement; Technology-based testing; Document; Comprehension; Source; Information; Strategy; Log file; Indicator; Validity; Student; University; Germany
Abstract:
Background: With digital technologies, competence assessments can provide process data, such as mouse clicks with corresponding timestamps, as additional information about the skills and strategies of test takers. However, in order to use variables generated from process data sensibly for educational purposes, their interpretation needs to be validated with regard to their intended meaning. Aims: This study seeks to demonstrate how process data from an assessment of multiple document comprehension can be used to represent sourcing, which summarizes activities for the consideration of the origin and intention of documents. The investigated process variables were created according to theoretical assumptions about sourcing, and systematically tested for differences between persons, units (i.e., documents and items), and properties of the test administration. Sample: The sample included 310 German university students (79.4% female), enrolled in several bachelor's or master's programmes of the social sciences and humanities. Methods: Regarding the hierarchical data structure, the hypotheses were analysed with generalized linear mixed models (GLMM). Results: The results mostly revealed expected differences between individuals and units. However, unexpected effects of the administered order of units and documents were detected. Conclusions: The study demonstrates the theory‐informed construction of process variables from log‐files and an approach for empirical validation of their interpretation. The results suggest that students apply sourcing for different reasons, but also stress the need of further validation studies and refinements in the operationalization of the indicators investigated. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation