Search results in the DIPF database of publications
Your query:
(Keywords: "Antwort")
100 items matching your search terms.
Invariance of the response processes between gender and modes in an assessment of reading
Kroehne, Ulf; Hahnel, Carolin; Goldhammer, Frank
Journal Article
| In: Frontiers in Applied Mathematics and Statistics | 2019
Author(s):
Kroehne, Ulf; Hahnel, Carolin; Goldhammer, Frank
Title:
Invariance of the response processes between gender and modes in an assessment of reading
In:
Frontiers in Applied Mathematics and Statistics, 5 (2019), Article 2
DOI:
10.3389/fams.2019.00002
URL:
https://www.frontiersin.org/articles/10.3389/fams.2019.00002/full
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Reading skills; Technology-based testing; Computer-assisted methods; Paper-and-pencil test; Response; Time; Measurement; Item response theory; Model; Gender-specific difference; Log file; Data analysis; Empirical study; Germany
Abstract:
In this paper, we developed a method to extract item-level response times from log data that are available in computer-based assessments (CBA) and paper-based assessments (PBA) with digital pens. Based on response times that were extracted using only time differences between responses, we used the bivariate generalized linear IRT model framework (B-GLIRT, [1]) to investigate response times as indicators of response processes. A parameterization that includes an interaction between the latent speed factor and the latent ability factor in the cross-relation function was found to fit the data best in both CBA and PBA. Data were collected with a within-subject design in a national add-on study to PISA 2012, administering two clusters of PISA 2009 reading units. After investigating the invariance of the measurement models for ability and speed between boys and girls, we found the expected gender effect in reading ability to coincide with a gender effect in speed in CBA. Taking this result as an indication of the validity of the time measures extracted from time differences between responses, we analyzed the PBA data and found the same gender effects for ability and speed. Analyzing the PBA and CBA data together, we identified the ability mode effect as the latent difference between reading measured in CBA and PBA. Similar to the gender effect, the mode effect in ability was observed together with a difference in latent speed between modes. However, while the relationship between speed and ability is identical for boys and girls, we found hints of mode differences in the estimated parameters of the cross-relation function used in the B-GLIRT model. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Sensitivity of test items to teaching quality
Naumann, Alexander; Rieser, Svenja; Musow, Stephanie; Hochweber, Jan; Hartig, Johannes
Journal Article
| In: Learning and Instruction | 2019
Author(s):
Naumann, Alexander; Rieser, Svenja; Musow, Stephanie; Hochweber, Jan; Hartig, Johannes
Title:
Sensitivity of test items to teaching quality
In:
Learning and Instruction, 60 (2019), pp. 41-53
DOI:
10.1016/j.learninstruc.2018.11.002
URL:
https://www.sciencedirect.com/science/article/pii/S0959475217307065?via%3Dihub
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Achievement test; Test construction; Teaching; Quality; Influencing factor; Test evaluation; Primary school; Science teaching; Active learning; Discovery learning; Teaching method; Effect; Measurement procedure; Test item; Problem solving; Primary school student; Duration; Response; Difficulty; Data analysis; Interpretation; Quasi-experiment; Germany
Abstract:
Instructional sensitivity is the psychometric capacity of tests or single items to capture effects of classroom instruction. Yet the relationship of current item sensitivity measures to (a) actual instruction and (b) overall test sensitivity is rather unclear. The present study aims at closing these gaps by investigating test and item sensitivity to teaching quality, reanalyzing data from a quasi-experimental intervention study in primary school science education (1026 students, 53 classes, Mage = 8.79 years, SDage = 0.49, 50% female). We examine (a) the correlation of item sensitivity measures and the potential for cognitive activation in class and (b) consequences for test score interpretation when assembling tests from items varying in their degree of sensitivity to cognitive activation. Our study (a) provides validity evidence that item sensitivity measures may be related to actual classroom instruction and (b) points out that inferences on teaching drawn from test scores may vary due to test composition. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Assessments and accountability in secondary education. International trends
Teltemann, Janna; Jude, Nina
Journal Article
| In: Research in Comparative and International Education | 2019
Author(s):
Teltemann, Janna; Jude, Nina
Title:
Assessments and accountability in secondary education. International trends
In:
Research in Comparative and International Education, 14 (2019) 2, pp. 249-271
DOI:
10.1177/1745499919846174
URL:
https://journals.sagepub.com/doi/10.1177/1745499919846174
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Secondary education; Student; Performance assessment; Accountability; Prevalence; Student achievement test; Educational monitoring; Trend; International comparison
Abstract (english):
International large-scale student assessments are the most prominent example of internationalization processes in education. A number of studies have analysed policy reactions to such studies, particularly to the Organisation for Economic Co-operation and Development (OECD) Programme for International Student Assessment (PISA). PISA and comparable projects have also raised concerns about an increase in assessments and accountability procedures. So far, systematic empirical evidence that could corroborate such concerns has been scarce. In this paper, we examine the prevalence of assessment and accountability practices at the secondary education level as well as changes in these practices over time. We explicitly focus on changes over time by drawing on data from PISA 2000 to PISA 2015. Analyses over time are not straightforward with PISA, as the questionnaires change between survey rounds, leading to different coverage of specific indicators over time. We present descriptive analyses for 20 OECD countries. The results show an increasing trend for the vast majority of the generated indicators, indicating that assessments and the use of assessments for purposes of accountability increased within the larger part of the OECD during the last 15 years. Likewise, more horizontal, peer-oriented evaluation procedures focusing on organizational learning gained importance. A cluster analysis based on selected indicators from 2015 revealed four distinct groups of countries, which are mainly distinguished by different levels of the prevalence of assessment, accountability and evaluation practices. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Unattended consequences. How text responses alter alongside PISA's mode change from 2012 to 2015
Zehner, Fabian; Goldhammer, Frank; Lubaway, Emily; Sälzer, Christine
Journal Article
| In: Education Inquiry | 2019
Author(s):
Zehner, Fabian; Goldhammer, Frank; Lubaway, Emily; Sälzer, Christine
Title:
Unattended consequences. How text responses alter alongside PISA's mode change from 2012 to 2015
In:
Education Inquiry, 10 (2019) 1, pp. 34-55
DOI:
10.1080/20004508.2018.1518080
URL:
https://www.tandfonline.com/doi/pdf/10.1080/20004508.2018.1518080?needAccess=true
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
PISA <Programme for International Student Assessment>; Student achievement test; Design; Change; Computer-assisted methods; Reading test; Response; Text; Difference; Information; Relevance; Grade 9; Empirical study; Germany
Abstract (english):
In 2015, the Programme for International Student Assessment (PISA) introduced multiple changes in its study design, the most extensive being the transition from paper-based to computer-based assessment. We investigated the differences between German students' text responses to eight reading items from the paper-based study in 2012 and text responses to the same items from the computer-based study in 2015. Two response features, information quantity and relevance proportion, were extracted by natural language processing techniques because they are crucial indicators for the response process. Showcasing potential differential relationships, we additionally examined gender differences. Modelling effects of the round of assessment, gender, and response correctness on the response features, we analysed responses from 15-year-olds and ninth-graders in Germany. Results revealed differences in the text responses between the rounds of assessment in that students included more information overall in 2015, and the proportions of relevance varied substantially across items. As the study investigated the mode change in PISA's natural (not experimental) setting, the differences could mirror cohort trends or design changes. However, with the evidence reported, we conclude that the differences could indicate mode effects. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Steuerung und Qualitätsentwicklung im Bremer Schulsystem
Feldhoff, Tobias; Wurster, Sebastian; Rettinger, Tanja; Hausen, Joshua; Neumann, Marko
Book Chapter
| In: Maaz, Kai; Hasselhorn, Marcus; Idel, Till-Sebastian; Klieme, Eckhard; Lütje-Klose, Birgit; Stanat, Petra; Neumann, Marko; Bachsleitner, Anna; Lühe, Josefine; Schipolowski, Stefan (Eds.): Zweigliedrigkeit und Inklusion im empirischen Fokus: Ergebnisse der Evaluation der Bremer Schulreform | Münster: Waxmann | 2019
Author(s):
Feldhoff, Tobias; Wurster, Sebastian; Rettinger, Tanja; Hausen, Joshua; Neumann, Marko
Title:
Steuerung und Qualitätsentwicklung im Bremer Schulsystem
In:
Maaz, Kai; Hasselhorn, Marcus; Idel, Till-Sebastian; Klieme, Eckhard; Lütje-Klose, Birgit; Stanat, Petra; Neumann, Marko; Bachsleitner, Anna; Lühe, Josefine; Schipolowski, Stefan (Eds.): Zweigliedrigkeit und Inklusion im empirischen Fokus: Ergebnisse der Evaluation der Bremer Schulreform, Münster: Waxmann, 2019, pp. 177-216
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
German
Keywords:
School leadership; School supervision; Survey; Empirical study; Expert report; Bremen; Germany; School system; Quality development; Governance; School development; Quality assurance; Procedure; Learning assessment; Evaluation; Central examination; School statistics; Data; Assessment; Use; School; Self-responsibility; School programme; Actor; Cooperation; Support
Abstract:
This chapter examines the governance and quality development of the Bremen school system from the perspective of school leaders and school supervisory authorities. The first thematic focus is the assessment and use of the various instruments of quality assurance and development available in the state of Bremen (e.g. comparative assessments, internal evaluation, school programmes). The second thematic focus reports findings on the cooperation between schools and school supervisory authorities, as well as on the perceived support for quality development in schools provided by the Landesinstitut für Schule (LIS, Bremen), the Lehrerfortbildungsinstitut (LFI, Bremerhaven), and the Senatorin für Kinder und Bildung (SKB) as the highest state authority. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation; Struktur und Steuerung des Bildungswesens
Vertiefende Analysen zur Umstellung des Modus von Papier auf Computer
Goldhammer, Frank; Harrison, Scott; Bürger, Sarah; Kroehne, Ulf; Lüdtke, Oliver; […]
Book Chapter
| In: Reiss, Kristina; Weis, Mirjam; Klieme, Eckhard; Köller, Olaf (Eds.): PISA 2018: Grundbildung im internationalen Vergleich | Münster: Waxmann | 2019
Author(s):
Goldhammer, Frank; Harrison, Scott; Bürger, Sarah; Kroehne, Ulf; Lüdtke, Oliver; Robitzsch, Alexander; Köller, Olaf; Heine, Jörg-Henrik; Mang, Julia
Title:
Vertiefende Analysen zur Umstellung des Modus von Papier auf Computer
In:
Reiss, Kristina; Weis, Mirjam; Klieme, Eckhard; Köller, Olaf (Eds.): PISA 2018: Grundbildung im internationalen Vergleich, Münster: Waxmann, 2019, pp. 163-186
URL:
https://www.pisa.tum.de/fileadmin/w00bgi/www/Berichtsbaende_und_Zusammenfassungungen/PISA_2018_Berichtsband_online_29.11.pdf#page=163
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
German
Keywords:
PISA <Programme for International Student Assessment>; Paper-and-pencil test; Technology-based testing; Change; Method; Effect; Computer-assisted methods; Test item; Response; Difficulty; Reading; Mathematics; Science; Test construction; Test administration; Correlation; Comparison; Germany
Abstract:
In PISA 2015, the assessment mode was switched from paper to computer. A national supplementary study within PISA 2018 accordingly aimed to conduct in-depth analyses of possible differences between paper-based and computer-based measurements. The focus was on the comparability of the measured construct and of the individual tasks (items), for example with regard to their difficulty. In addition, the effects of the mode change on comparability with the results of earlier PISA assessments in Germany were examined. The empirical basis comprised data from the PISA 2015 field trial, as well as data collected additionally in the national PISA 2018 main study on a second test day using paper-based booklets from PISA 2009. First results of the supplementary study provide evidence for construct equivalence between paper-based and computer-based measurements. Moreover, the data of the supplementary study indicate that the computer-based items are on average somewhat more difficult than the paper-based items. Regarding the changes between 2015 and 2018, there is high agreement between the internationally reported (original) trend and the national (marginal) trend. The changes between 2009 and 2018 turn out somewhat more favourable overall for the national trend, which is based solely on paper-based measurements, than for the original trend. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Delegation und Entgrenzung. Zur Bedeutung der Diagnostik in der Sonderpädagogik
Leser, Christoph; Jornitz, Sieglinde
Book Chapter
| In: Ellinger, Stephan; Schott-Leser, Hannah (Eds.): Rekonstruktionen sonderpädagogischer Praxis: Eine Fallsammlung für die Lehrerbildung | Opladen: Budrich | 2019
Author(s):
Leser, Christoph; Jornitz, Sieglinde
Title:
Delegation und Entgrenzung. Zur Bedeutung der Diagnostik in der Sonderpädagogik
In:
Ellinger, Stephan; Schott-Leser, Hannah (Eds.): Rekonstruktionen sonderpädagogischer Praxis: Eine Fallsammlung für die Lehrerbildung, Opladen: Budrich, 2019, pp. 103-126
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
German
Keywords:
Special education; Diagnostics; Special school; School for speech therapy; Special education teacher; Reflection <Phil>; Case example; Student; Behavioural problem; Mother; Communication; Special school teacher; Pedagogical action; Responsibility; Supervision; Transcript; Analysis
Abstract:
This contribution deals with the phenomenon of delegation and boundary dissolution in special education practice. Its core is the reconstructive analysis of a supervision transcript. In supervision, teachers exchange views on their practice and try to find ways out of the crises they describe. The diagnostic instrument widespread in special education is often not used to work with the child in a more targeted pedagogical way; instead, it serves to legitimize the delegation of the child to another institution. The instrument is thereby turned into its opposite.
DIPF-Departments:
Informationszentrum Bildung
Response time-based treatment of omitted responses in computer-based testing
Frey, Andreas; Spoden, Christian; Goldhammer, Frank; Wenzel, S. Franziska C.
Journal Article
| In: Behaviormetrika | 2018
Author(s):
Frey, Andreas; Spoden, Christian; Goldhammer, Frank; Wenzel, S. Franziska C.
Title:
Response time-based treatment of omitted responses in computer-based testing
In:
Behaviormetrika, 45 (2018) 2, pp. 505-526
DOI:
10.1007/s41237-018-0073-9
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Method; Technology-based testing; Response; Duration; Behaviour; Item response theory; Missing data; Data analysis; Test item; Typology; Media literacy; Student achievement test; Test evaluation
Abstract:
A new response time-based method for coding omitted item responses in computer-based testing is introduced and illustrated with empirical data. The new method is derived from the theory of missing data problems of Rubin and colleagues and embedded in an item response theory framework. Its basic idea is to use item response times to test statistically, for each individual item, whether omitted responses are missing completely at random (MCAR) or missing due to a lack of ability and thus not at random (MNAR), with fixed Type I and Type II error levels. If the MCAR hypothesis is maintained, omitted responses are coded as not administered (NA), and as incorrect (0) otherwise. The empirical illustration draws on the responses given by N = 766 students to 70 items of a computer-based ICT skills test. The new method is compared with the two common deterministic methods of scoring omitted responses as 0 or as NA. As a result, response time thresholds from 18 to 58 s were identified. With 61%, more omitted responses were recoded into 0 than into NA (39%). The differences in difficulty were larger when the new method was compared to deterministically scoring omitted responses as NA than when it was compared to scoring omitted responses as 0. The variances and reliabilities obtained under the three methods showed small differences. The paper concludes with a discussion of the practical relevance of the observed effect sizes and with recommendations for the practical use of the new method in the early stage of data processing. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
How to conceptualize, represent, and analyze log data from technology-based assessments? A generic […]
Kroehne, Ulf; Goldhammer, Frank
Journal Article
| In: Behaviormetrika | 2018
Author(s):
Kroehne, Ulf; Goldhammer, Frank
Title:
How to conceptualize, represent, and analyze log data from technology-based assessments? A generic framework and an application to questionnaire items
In:
Behaviormetrika, 45 (2018) 2, pp. 527-563
DOI:
10.1007/s41237-018-0063-y
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Educational research; Empirical research; Log file; Data analysis; Technology-based testing; PISA <Programme for International Student Assessment>; Questionnaire; Conception; Test construction; Data; Typology; Hardware; Response; Behaviour; Duration; Interaction; Human-machine communication; Indicator
Abstract:
Log data from educational assessments attract more and more attention, and large-scale assessment programs have started providing log data as scientific use files. Such data, generated as a by-product of computer-assisted data collection, have been known as paradata in survey research. In this paper, we integrate log data from educational assessments into a taxonomy of paradata. To provide a generic framework for the analysis of log data, finite state machines are suggested. Beyond its computational value, the specific benefit of using finite state machines is achieved by separating platform-specific log events from the definition of indicators by states. Specifically, states represent filtered log data given a theoretical process model and therefore encode the information of log files selectively. The approach is empirically illustrated using log data from the context questionnaires of the Programme for International Student Assessment (PISA). We extracted item-level response time components from questionnaire items that were administered as item batteries with multiple questions on one screen and related them to the item responses. Finally, the taxonomy and the finite state machine approach are discussed with respect to the definition of complete log data, the verification of log data, and the reproducibility of log data analyses. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Automatically analyzing text responses for exploring gender‑specific cognitions in PISA reading
Zehner, Fabian; Goldhammer, Frank; Sälzer, Christine
Journal Article
| In: Large-scale Assessments in Education | 2018
Author(s):
Zehner, Fabian; Goldhammer, Frank; Sälzer, Christine
Title:
Automatically analyzing text responses for exploring gender‑specific cognitions in PISA reading
In:
Large-scale Assessments in Education, 6 (2018) 6:7
DOI:
10.1186/s40536-018-0060-3
URL:
https://largescaleassessmentsineducation.springeropen.com/articles/10.1186/s40536-018-0060-3
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Student; Reading skills; Gender-specific difference; Reading test; Response; Analysis; Cognitive processes; PISA <Programme for International Student Assessment>; Secondary analysis; Germany
Abstract (english):
Background: The gender gap in reading literacy is repeatedly found in large-scale assessments. This study compared girls' and boys' text responses in a reading test by applying natural language processing. For this, a theoretical framework was compiled that allows mapping of response features to the preceding cognitive components, such as micro- and macropropositions from the situation model. Methods: In total, n = 33,604 responses from the German sample of the Programme for International Student Assessment (PISA) 2012 reading test were analyzed to characterize the genders' typical cognitive approaches. The analyses mainly explored the gender gap by contrasting groups of responses typical for either gender. These gender-specific responses characterize the typical responding of the genders to PISA reading questions. Results: Responses typical for girls contained three to five more proposition entities from the situation model, irrespective of response correctness. They integrated more relevant propositions and constituted better fits to the question focus. That means, in answering questions which ask for explicit information from the stimulus text, the typical girl responses appropriately encompassed more micropropositions, and typical boy responses tended to include more macropropositions; vice versa for questions requesting implicit information. Conclusion: It appears that typical boy responses to PISA reading questions are characterized by struggling with retrieving and integrating propositions from the situation model, whereas the typical girl response liberally juggles these to formulate the answer. The results demonstrate that text responses are a neglected but informative source for educational large-scale assessments, made accessible through natural language processing.
DIPF-Departments:
Bildungsqualität und Evaluation