Search results in the DIPF database of publications
Your query: (Schlagwörter: "Validität")
135 items matching your search terms.
Author(s):
Buchholz, Janine; Hartig, Johannes
Title:
Measurement invariance testing in questionnaires. A comparison of three Multigroup-CFA and IRT-based approaches
In:
Psychological Test and Assessment Modelling, 62 (2020) 1, pp. 29-54
URL:
https://www.psychologie-aktuell.com/fileadmin/Redaktion/Journale/ptam-2020-1/03_Buchholz.pdf
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
PISA <Programme for International Student Assessment>; Item response theory; Factor analysis; Student achievement; Achievement measurement; Measurement; Invariance; Validity; Statistical method
Abstract (english):
International Large-Scale Assessments aim at comparisons of countries with respect to latent constructs such as attitudes, values and beliefs. Measurement invariance (MI) needs to hold in order for such comparisons to be valid. Several statistical approaches to test for MI have been proposed: While Multigroup Confirmatory Factor Analysis (MGCFA) is particularly popular, a newer, IRT-based approach was introduced for non-cognitive constructs in PISA 2015, thus raising the question of consistency between these approaches. A total of three approaches (MGCFA for ordinal and continuous data, multi-group IRT) were applied to simulated data containing different types and extents of MI violations, and to the empirical non-cognitive PISA 2015 data. Analyses are based on indices of the magnitude (i.e., parameter-specific modification indices resulting from MGCFA and group-specific item fit statistics resulting from the IRT approach) and direction of local misfit (i.e., standardized parameter change and mean deviation, respectively). Results indicate that all measures were sensitive to (some) MI violations and more consistent in identifying group differences in item difficulty parameters.
DIPF-Departments:
Bildungsqualität und Evaluation
Author(s):
Dignath, Charlotte; Meschede, Nicola; Kunter, Mareike; Hardy, Ilonca
Title:
Entwicklung eines Fragebogens zur Erfassung von Überzeugungen Lehramtsstudierender zum Unterrichten in heterogenen Klassen: Befunde zur Kriteriumsvalidität und Veränderungssensitivität
In:
Psychologie in Erziehung und Unterricht, 67 (2020) 3, pp. 194-211
DOI:
10.2378/peu2020.art16d
URL:
https://reinhardt-journals.de/index.php/peu/article/view/152742
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
German
Keywords:
Heterogeneity; Student teacher; School class; Belief; Attitude <Psy>; Measurement method; Questionnaire; Conception; Validity; Reliability; Professional competence; Child with a disability; Inclusion; Migration background; Achievement; Ability grouping; Type of school; Model
Abstract:
To date, only a few instruments are available for examining teachers' beliefs about teaching in heterogeneous classrooms in the sense of a broad understanding of inclusion. In three studies, we examined the structure and validity of three scales measuring student teachers' beliefs about teaching in heterogeneous classrooms. Following the KIESEL instrument by Bosse and Spörer (2014), the scales were extended beyond the heterogeneity dimension of disability (i.e., the joint mainstream schooling of students with and without special educational needs) by adding the dimensions of cultural heterogeneity and performance-related heterogeneity. In several validation studies, the sensitivity of the instrument was tested with regard to expected group differences and to change induced by instruction. Furthermore, the scales can discriminate between different constructs, e.g., self-efficacy.
Abstract (english):
Only a few instruments have been developed to assess teachers' beliefs concerning various dimensions of heterogeneity following a wide definition of inclusion. In three studies we investigated the structure and the validity of three scales on heterogeneity beliefs. The scales were developed based on the KIESEL instrument by Bosse and Spörer (2014), adding two belief scales regarding cultural heterogeneity and performance-based heterogeneity to their scale on inclusive beliefs (i.e., teaching students with and without special educational needs within the same classroom). In several validation studies, we tested the sensitivity of the instrument regarding differences between groups as well as the assessment of belief development over time. Furthermore, the scales can discriminate between different constructs, for example, beliefs and self-efficacy.
DIPF-Departments:
Bildungsqualität und Evaluation
Author(s):
Engelhardt, Lena; Naumann, Johannes; Goldhammer, Frank; Frey, Andreas; Wenzel, S. Franziska C.; Hartig, Katja; Horz, Holger
Title:
Convergent evidence for validity of a performance-based ICT skills test
In:
European Journal of Psychological Assessment, 36 (2020) 2, pp. 269-279
DOI:
10.1027/1015-5759/a000507
URN:
urn:nbn:de:0111-pedocs-218426
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-218426
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
ICT education; Information and communication technology; Problem solving; Competence; Skill; Student; Lower secondary education; Test; Test item; Validity; Evidence; Germany
Abstract (english):
The goal of this study was to investigate sources of evidence of convergent validity supporting the construct interpretation of scores on a simulation-based ICT skills test. The construct definition understands ICT skills as reliant on ICT-specific knowledge as well as comprehension and problem-solving skills. On the basis of this, a validity argument comprising three claims was formulated and tested. (1) In line with the classical nomothetic span approach, all three predictor variables explained task success positively across all ICT skills items. As ICT tasks can vary in the extent to which they require construct-related knowledge and skills and in the way related items are designed and implemented, the effects of construct-related predictor variables were expected to vary across items. (2) A task-based analysis approach revealed that the item-level effects of the three predictor variables were in line with the targeted construct interpretation for most items. (3) Finally, item characteristics could significantly explain the random effect of problem-solving skills, but not comprehension skills. Taken together, the obtained results generally support the validity of the construct interpretation.
DIPF-Departments:
Bildungsqualität und Evaluation
Author(s):
Fauth, Benjamin; Göllner, Richard; Lenske, Gerlinde; Praetorius, Anna-Katharina; Wagner, Wolfgang
Title:
Who sees what? Conceptual considerations on the measurement of teaching quality from different perspectives
In:
Zeitschrift für Pädagogik. Beiheft, 66 (2020), pp. 138-155
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Empirical research; Teaching; Quality; Measurement; Student; Teacher; Perception; Assessment; Survey instrument; Validity; Student role; Teacher role; Taxonomy; Classroom management; Comparison; Questionnaire; Observation; Self-assessment; External assessment
Abstract:
One puzzling finding in education research is that teachers, students, and external observers agree only marginally on their ratings of teaching quality. In this theoretical contribution, we summarize and reappraise previous findings on agreement between different raters of teaching quality. We explain these findings by thoroughly examining the instruments that have been used to measure teaching quality. Building on this, we propose a reference perspective matrix, which should be useful in explaining perspective-specific rating mechanisms behind responses to certain survey or observation items. The reference perspective matrix could thus afford a theoretical foundation for future studies on the assessment of teaching quality. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Author(s):
Kröhne, Ulf; Deribo, Tobias; Goldhammer, Frank
Title:
Rapid guessing rates across administration mode and test setting
In:
Psychological Test and Assessment Modeling, 62 (2020) 2, pp. 144-177
DOI:
10.25656/01:23630
URN:
urn:nbn:de:0111-pedocs-236307
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-236307
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Test; Assessment; Innovation; Validity; Technology-based testing; Design; Test construction; Testing procedure; Effect; Behavior; Log file; Experiment; University student; Comparative study
Abstract (english):
Rapid guessing can threaten measurement invariance and the validity of large-scale assessments, which are often conducted under low-stakes conditions. Comparing measures collected under different administration modes or in different test settings necessitates that rapid guessing rates also be comparable. Response time thresholds can be used to identify rapid guessing behavior. Using data from an experiment embedded in an assessment of university students as part of the National Educational Panel Study (NEPS), we show that rapid guessing rates can differ across modes. Specifically, rapid guessing rates are found to be higher for unproctored individual online assessment. It is also shown that rapid guessing rates differ across groups of students and are related to properties of the test design. No relationship between dropout behavior and rapid guessing rates was found. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Author(s):
Naumann, Alexander; Kuger, Susanne; Köhler, Carmen; Hochweber, Jan
Title:
Conceptual and methodological challenges in detecting the effectiveness of learning and teaching
In:
Zeitschrift für Pädagogik. Beiheft, 66 (2020), pp. 179-196
Publication Type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Effectiveness; Teaching; Learning; Effect; Student achievement; Measurement method; Conception; Modeling; Teaching process; Achievement measurement; Validity; Methodology
Abstract:
One major goal of research on educational effectiveness is to detect the effects of teaching and learning. Reliably detecting these effects requires identifying and adequately measuring (a) the relevant classroom processes and (b) outcomes at the student and classroom levels, as well as (c) modeling the link between the two. The present paper aims to identify and discuss current conceptual and methodological challenges with regard to making inferences about the effectiveness of teaching and learning. We give a brief overview of current practices, discuss key quality criteria with respect to these three aspects, and identify areas in need of further development. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Author(s):
Schoor, Cornelia; Hahnel, Carolin; Artelt, Cordula; Reimann, Daniel; Kroehne, Ulf; Goldhammer, Frank
Title:
Entwicklung und Skalierung eines Tests zur Erfassung des Verständnisses multipler Dokumente von Studierenden
In:
Diagnostica, 66 (2020) 2, pp. 123-135
DOI:
10.1026/0012-1924/a000231
URN:
urn:nbn:de:0111-pedocs-218434
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-218434
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
German
Keywords:
Test construction; University student; Measurement; Text comprehension; Source; Content; Document; Diagnostic test; Competence; Data collection; Data analysis; Model; Scaling; Validity
Abstract:
Multiple document comprehension (MDC) is understood as the ability to construct an integrated representation of a subject area from different sources of information. As such, it is an important competence both for successfully completing a course of study and for participating in society. To date, however, there is no established diagnostic instrument in this area. To close this gap, a test was developed that covers four central cognitive requirements of MDC and was evaluated on the basis of data from 310 students of the social sciences and humanities. The competence measured by the MDC test proved to be unidimensional. The MDC test score showed theoretically consistent relationships with the final school exam grade (Abitur), the stage of study, and performance on an essay task. Overall, the results provide empirical evidence that the MDC test score reflects university students' cross-disciplinary ability to comprehend multiple documents. (DIPF/Orig.)
Abstract (english):
Multiple document comprehension (MDC) is defined as the ability to construct an integrated representation based on different sources of information on a particular topic. It is an important competence for both the successful accomplishment of university studies and participation in societal discussions. Yet, there is no established assessment instrument for MDC. Therefore, we developed a test covering four theory-based cognitive requirements of MDC. Based on the data of 310 university students of social sciences and humanities, the MDC test proved to be a unidimensional measure. Furthermore, the test score was related to the final school exam grade, the study level (bachelor / master), and the performance in an essay task. The empirical results suggest that the score of the MDC test can be interpreted as the generic competence of university students to understand multiple documents. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Author(s):
Whitelock-Wainwright, Alexander; Gašević, Dragan; Tsai, Yi-Shan; Drachsler, Hendrik; Scheffel, Maren; Muñoz-Merino, Pedro J.; Tammets, Kairit; Delgado Kloos, Carlos
Title:
Assessing the validity of a learning analytics expectation instrument. A multinational study
In:
Journal of Computer Assisted Learning, 36 (2020) 2, pp. 209-240
DOI:
10.1111/jcal.12401
URL:
https://onlinelibrary.wiley.com/doi/abs/10.1111/jcal.12401?af=R
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Learning process; Analysis; Implementation; University; Student; Expectation; Ethics; Validity; Questionnaire; Survey instrument; Translation; Learning; Self-regulation; Data analysis; Factor analysis; Estonia; Spain; Netherlands
Abstract:
To assist higher education institutions in meeting the challenge of limited student engagement in the implementation of Learning Analytics services, the Questionnaire for Student Expectations of Learning Analytics (SELAQ) was developed. This instrument contains 12 items, which are explained by a purported two-factor structure of "Ethical and Privacy Expectations" and "Service Feature Expectations." As it stands, however, the SELAQ has only been validated with students from a UK university, which is problematic given that interest in Learning Analytics extends beyond this context. Thus, the aim of the current work was to assess whether the translated SELAQ can be validated in three contexts (an Estonian, a Spanish, and a Dutch university). The findings show that the model provided acceptable fit in both the Spanish and Dutch samples, but was not supported in the Estonian student sample. In addition, an assessment of local fit is undertaken for each sample, which provides important points that need to be considered in future work. Finally, a general comparison of expectations across contexts is undertaken, which is discussed in relation to the General Data Protection Regulation (2018). (DIPF/Orig.)
DIPF-Departments:
Informationszentrum Bildung
Author(s):
Zhou, Ji; He, Jia; Lafontaine, Dominique
Title:
Cross-cultural comparability and validity of metacognitive knowledge in reading in PISA 2009. A comparison of two scoring methods
In:
Assessment in Education, 27 (2020) 6, pp. 635-654
DOI:
10.1080/0969594X.2020.1828820
URN:
urn:nbn:de:0111-pedocs-221384
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-221384
Publication Type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Metacognition; Knowledge; Measurement; Reading; PISA <Programme for International Student Assessment>; Cross-cultural comparison; Data analysis; Secondary analysis; Validity; Comparison; Likert scale; OECD countries
Abstract:
Accurate measurement of metacognitive knowledge in reading is important. Different instruments and scoring methods have been proposed but not systematically compared for their measurement comparability across cultures and their validity. Using student data from 34 OECD countries in the 2009 Programme for International Student Assessment (PISA), we compared two scoring methods for metacognitive knowledge in reading: one based on pair-wise comparisons of strategies and one based on conventional Likert-scale responses to selected items. Metacognitive knowledge scored with conventional Likert-scale responses demonstrated higher cross-cultural comparability than the pair-wise comparison method. Linked with reading competence, motivation and control strategy in reading, scores from the two scoring methods showed differential criterion validity, possibly related to the types of tasks (understanding and remembering versus summarising), item content (complexity and discrimination between preferred strategies in reading) and common method variance (e.g., individuals' stable response style in rating scales). Theoretical and methodological implications are discussed. (DIPF/Orig.)
DIPF-Departments:
Bildungsqualität und Evaluation
Author(s):
Frey, Andreas; Hartig, Johannes
Title:
Methodological challenges of international student assessment
In:
Harju-Luukkainen, Heidi; McElvany, Nele; Stang, Justine (Eds.): Monitoring student achievement in the 21st century: European policy perspectives and assessment strategies. Cham: Springer, 2020, pp. 39-49
DOI:
10.1007/978-3-030-38969-7_4
URL:
https://link.springer.com/chapter/10.1007/978-3-030-38969-7_4
Publication Type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
English
Keywords:
Student achievement test; Achievement measurement; International comparison; Methodology; Challenge; Change; Student achievement; Heterogeneity; Adaptive testing; Survey; Data; Open science; Validity
Abstract (english):
International large-scale assessments are very successful. One key factor in this success is their rigorous methodological and psychometric basis. Because education systems worldwide are subject to rapid changes, international large-scale assessments need to evolve as well. We describe five current methodological challenges that should be addressed so that large-scale assessments can continue to provide highly useful information on educational outcomes in the future. First, new or changed constructs should be adopted, and constructs with declining importance should be dropped from the assessments. Second, the heterogeneity of student performance within and between countries should be better accounted for. This can be achieved by completing the introduction of computerized adaptive testing into international large-scale assessments and making full use of computers to optimise the testing and scaling process. Third, more analytical effort should be invested in the measurement and modelling of context variables, mainly by applying latent variable models.
DIPF-Departments:
Bildungsqualität und Evaluation