InSe – Instructional Sensitivity of Test Items
The InSe project investigates the extent to which tests and test items can capture the effects of instruction.
Researchers and policy-makers regularly rely on student performance data to draw inferences about schools, teachers, and teaching. Yet valid inferences from student test scores require that the instruments be sensitive to the instruction students have received in class. Accordingly, measures of the instructional sensitivity of tests and test items can provide empirical support for validity claims about inferences on instruction derived from student test scores.
The InSe project focuses on two questions:
(a) How can the instructional sensitivity of tests and items be measured empirically?
(b) How can information on instructional sensitivity be used in educational and psychological testing?
The project builds on a longitudinal multilevel DIF model (LML-DIF model) for evaluating instructional sensitivity (Naumann, Hochweber, & Hartig, 2014). The LML-DIF model integrates current approaches to measuring the instructional sensitivity of items by considering both differences between measurement points and differences between learning groups (i.e., classes).
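Purely as an illustration of the general idea (the exact specification is given in Naumann, Hochweber, & Hartig, 2014; the symbols below are assumptions, not the published parameterization), a Rasch-type longitudinal multilevel DIF model might express the probability that student $p$ in class $c$ solves item $i$ at time $t$ as:

```latex
P(X_{pcit} = 1 \mid \theta_{pct}) =
  \frac{\exp\!\big(\theta_{pct} - (\beta_i - \delta_{it} - u_{cit})\big)}
       {1 + \exp\!\big(\theta_{pct} - (\beta_i - \delta_{it} - u_{cit})\big)}
```

Here $\theta_{pct}$ is the student's ability, $\beta_i$ the item's baseline difficulty, $\delta_{it}$ an overall difficulty shift between measurement points, and $u_{cit}$ a class-level effect capturing differences between learning groups. In such a sketch, an instructionally sensitive item shows a substantial overall shift and/or substantial between-class variance in that shift.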
Four major objectives are pursued:
- further development and extension of the LML-DIF model
- examination of methods and conditions of parameter estimation
- validation of the statistical indicators of instructional sensitivity
- development of criteria for classifying instructional sensitivity at the test and item levels
To achieve these objectives, the project comprises two work packages. The first covers model development, simulation studies, and secondary analyses; the second involves an empirical study in Swiss schools.
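The basic intuition behind such simulation studies can be sketched in a toy example. The following is a minimal, illustrative simulation (parameter values and names are assumptions, not the project's actual design): students nested in classes answer one item before and after instruction, the item's pre-post difficulty drop varies across classes, and class-to-class variation in the observed gains is what a sensitivity measure would pick up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions): 50 classes of 30 students each.
n_classes, n_students = 50, 30
theta = rng.normal(0.0, 1.0, size=(n_classes, n_students))  # student abilities
base_difficulty = 0.5                                       # pretest item difficulty
class_shift = rng.normal(1.0, 0.5, size=n_classes)          # class-specific difficulty drop

def p_correct(ability, difficulty):
    """Rasch-type probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

# Simulate dichotomous pretest and posttest responses.
pre = rng.random((n_classes, n_students)) < p_correct(theta, base_difficulty)
post = rng.random((n_classes, n_students)) < p_correct(
    theta, base_difficulty - class_shift[:, None]
)

# An instructionally sensitive item shows both an overall pre-post gain
# and variation of that gain across classes.
gains = post.mean(axis=1) - pre.mean(axis=1)
print(f"mean gain: {gains.mean():.2f}, SD of class gains: {gains.std():.2f}")
```

A real analysis would of course fit a multilevel IRT model to such data rather than compare raw proportions; the sketch only shows why both the time dimension and the class dimension matter.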
The cooperation project "Instructional Sensitivity of Test Items" of the DIPF | Leibniz Institute for Research and Information in Education and the University of Teacher Education St. Gallen (PHSG) is funded by the German Research Foundation (DFG) and the Swiss National Science Foundation (SNF).
Cooperation:
- Min Li, University of Washington, Seattle
Publications:
- Naumann, A., Rieser, S., Musow, S., Hochweber, J., & Hartig, J. (2019). Sensitivity of test items to teaching quality. Learning and Instruction, 60, 41–53. https://doi.org/10.1016/j.learninstruc.2018.11.002
- Naumann, A., Musow, S., Aichele, C., Hochweber, J., & Hartig, J. (2018). Instruktionssensitivität von Tests und Items [Instructional sensitivity of tests and items]. Zeitschrift für Erziehungswissenschaft. Advance online publication. https://doi.org/10.1007/s11618-018-0832-0
- Naumann, A., Hartig, J., & Hochweber, J. (2017). Absolute and relative measures of instructional sensitivity. Journal of Educational and Behavioral Statistics. Advance online publication. https://doi.org/10.3102/1076998617703649
- Naumann, A., Hochweber, J., & Klieme, E. (2016). A psychometric framework for the evaluation of instructional sensitivity. Educational Assessment, 21(2), 89–101. https://doi.org/10.1080/10627197.2016.1167591
- Naumann, A., Hochweber, J., & Hartig, J. (2014). Modeling instructional sensitivity using a longitudinal multilevel differential item functioning approach. Journal of Educational Measurement, 51, 381–399. https://doi.org/10.1111/jedm.12051
Project team:
- Stephanie Musow
- Dr. Alexander Naumann
Department: Teacher and Teaching Quality
Duration: 04/2015 – 09/2018