DIPF database of publications


Editors:
Goldhammer, Frank (Ed.); Scherer, Ronny (Ed.); Greiff, Samuel (Ed.)

Title:
Advancements in technology-based assessment
Emerging item formats, test designs, and data sources

Source:
Lausanne : Frontiers Media (2020)

URL of full text:
https://www.frontiersin.org/research-topics/7841/advancements-in-technology-based-assessment-emerging-item-formats-test-designs-and-data-sources

Language:
English

Document type:
Edited volume; journal special issue

Keywords:
Technology-based testing, item, test, design, evaluation, automation, process data processing, learning, assessment


Abstract (English):
Technology has become an indispensable tool for educational and psychological assessment in today's world. Researchers and large-scale assessment programs alike are increasingly using digital technology (e.g., laptops, tablets, and smartphones) to collect behavioral data that go beyond whether a response is correct. Along these lines, technology innovates and enhances assessments in terms of item and test design, methods of test delivery, data collection and analysis, as well as the reporting of test results. The aim of this Research Topic is to present recent advancements in technology-based assessment. Our focus is on cognitive assessments, including the measurement of abilities, competencies, knowledge, and skills, but may also include non-cognitive aspects of assessment.

In the area of (cognitive) assessments, the innovations driven by technology are manifold. Digital assessments facilitate the creation of new types of stimuli and response formats that were out of reach for paper-based assessments; for instance, interactive simulations including multimedia elements, as well as virtual or augmented realities that serve as the task environment. Moreover, technology allows the automated generation of items based on specific item models. Such items can be assembled into tests more flexibly than paper-and-pencil tests allow and could even be created on the fly; for instance, tailoring item difficulty to individual ability (adaptive testing) while assuring that multiple content constraints are met. As a requirement for adaptive testing, or to lower the burden on raters coding item responses manually, computers enable the automatic scoring of constructed responses; for instance, text responses can be scored automatically using natural language processing and text mining.

Technology-based assessments provide not only response data (e.g., correct vs. incorrect responses) but also process data (e.g., frequencies and sequences of test-taking strategies, including navigation behavior), which reflect the course of solving a test item. Process data have been used successfully, among other things, to evaluate data quality, to define process-oriented constructs, to improve measurement precision, and to address substantive research questions.

We expect the contributions to this Research Topic to build on this research by considering how technology can further improve and enhance educational and psychological assessment. Regarding educational testing, research papers on both the assessment of learning (e.g., summative assessment of learning outcomes) and assessment for learning (e.g., formative assessment to support the learning process) are welcome. We expect submissions of empirical papers that present and evaluate innovative technology-based assessment approaches, as well as new applications or illustrations of existing approaches. We are also interested in papers addressing the validity of test scores and other indicators obtained from innovative assessment procedures.


DIPF-Departments:
Educational Quality and Evaluation

Notes: