Deutsches Institut für Internationale Pädagogische Forschung

Publication database

Author:
Köhler, Carmen

Title:
Isn't something missing? Latent variable models accounting for item nonresponse

Source:
Berlin : Freie Universität Berlin (2017)

Full-text URL:
http://www.diss.fu-berlin.de/diss/receive/FUDISS_thesis_000000103203

URN:
urn:nbn:de:kobv:188-fudissthesis000000103203-8

Language:
English

Document type:
1. Monographs (authorship); monograph

Keywords:
Empirical research, Evaluation, Missing data, Item response theory, Competence, Achievement measurement, Model, Student achievement, Student achievement test, Statistical method, Test scoring


Abstract (original):
Item nonresponse in competence tests poses a threat to valid and reliable competence measurement, especially if the missing values occur systematically and relate to the unobserved response. This is often the case in the context of large-scale assessments, where the failure to respond to an item relates to examinee ability. Researchers have developed methods that consider the dependency between ability and item nonresponse by incorporating a model for the process that causes missing values into the measurement model for ability. These model-based approaches seem very promising and might prove superior to common missing data approaches, which typically fail to take the dependency between ability and nonresponse into account. Up to this point, the approaches have barely been investigated in terms of applicability and performance with regard to the scaling of competence tests in large-scale assessments. The current dissertation bridges the gap between these theoretically postulated models and their possible implementation in the context of large-scale assessments. It aims at (1) testing the applicability of model-based approaches to competence test data, and (2) evaluating whether and under what missing data conditions these approaches are superior to common missing data approaches. Three research studies were conducted for this purpose. Study 1 investigated the assumptions of model-based approaches, whether they hold in empirical practice, and how violations of those assumptions affect individual person parameters. Study 2 focused on features of examinees' nonresponse behavior, such as its stability across different competence tests and how it relates to other examinee characteristics. Study 3 examined the performance of model-based approaches compared to other approaches. Results demonstrate that model-based approaches can be applied to large-scale assessment data, though slight extensions of the models might enhance the accuracy of parameter estimates.
Further, persons' tendencies not to respond can be considered person-specific attributes, which are relatively constant across different competence tests and also relate to other stable person characteristics. Findings from the third study confirmed the superiority of the model-based approaches compared to common missing data approaches, although a model that simply ignores missing values also led to acceptable results. Model-based approaches show several advantages over common missing data approaches. Considering their complexity, however, the benefits and drawbacks of the different methods need to be weighed. Important issues in the debate on an appropriate scaling method concern model complexity, consequences for examinees' test-taking behavior, and precision of parameter estimates. For many large-scale assessments, a change in the missing data treatment is clearly necessary. Whether model-based approaches will replace former methods is yet to be determined. They certainly count among the most advanced methods for handling missing values in the scaling of competence tests. (DIPF/Orig.)
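The missingness mechanism the abstract describes, in which a latent response propensity correlated with ability drives item nonresponse, can be illustrated with a small simulation. This is a hypothetical sketch in NumPy, not the dissertation's actual models; the correlation value, item difficulties, and sample sizes are invented for illustration. It also shows why two common treatments (scoring missing responses as incorrect vs. ignoring them) diverge under such a mechanism:

```python
import numpy as np

rng = np.random.default_rng(42)
n_persons, n_items = 2000, 20

# Latent ability (theta) and latent response propensity (xi),
# correlated at rho = 0.5 (an invented value for illustration).
rho = 0.5
theta, xi = rng.multivariate_normal(
    [0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_persons
).T

# Rasch-type response probabilities with evenly spaced item difficulties.
b = np.linspace(-2.0, 2.0, n_items)
p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))
y = (rng.random((n_persons, n_items)) < p_correct).astype(float)

# Whether an item is answered at all depends on the propensity xi,
# so nonresponse is systematic: low-xi (and hence, on average,
# low-theta) examinees skip more items.
p_respond = 1.0 / (1.0 + np.exp(-xi[:, None]))
observed = rng.random((n_persons, n_items)) < p_respond

# Treatment 1: score every missing response as incorrect.
score_incorrect = np.where(observed, y, 0.0).mean(axis=1)

# Treatment 2: ignore missing responses (proportion correct among
# the observed items; NaN for persons who answered nothing).
n_resp = observed.sum(axis=1)
obs_correct = np.where(observed, y, 0.0).sum(axis=1)
score_ignore = np.where(n_resp > 0, obs_correct / np.maximum(n_resp, 1), np.nan)

# Under this mechanism, missing rates correlate negatively with ability,
# so "missing = incorrect" systematically depresses low-ability scores.
miss_rate = 1.0 - observed.mean(axis=1)
print("corr(theta, missing rate):", np.corrcoef(theta, miss_rate)[0, 1])
```

Because the propensity and ability are correlated, the two treatments disagree most for exactly the examinees whose ability is hardest to measure, which is the dependency the model-based approaches in the dissertation are designed to capture.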


DIPF department:
Bildungsqualität und Evaluation

Last modified Nov 11, 2016