Search results in the DIPF database of publications
Your query:
(Keywords: "Technologie")
115 items matching your search terms.
On the speed sensitivity parameter in the lognormal model for response times. Implications for test assembly
Author(s):
Becker, Benjamin; Debeer, Dries; Weirich, Sebastian; Goldhammer, Frank
Title:
On the speed sensitivity parameter in the lognormal model for response times. Implications for test assembly
In:
Applied Psychological Measurement, 45 (2021) 6, pp. 407-422
DOI:
10.1177/01466216211008530
URL:
https://journals.sagepub.com/doi/abs/10.1177/01466216211008530
Publication Type:
3a. Contributions to peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Software; Technology-based testing; Measurement procedure; Item response theory; Achievement test; Question; Answer; Duration; Influencing factor; Test construction; Model; Comparison; Test theory; Simulation
Abstract (english):
In high-stakes testing, multiple test forms are often used and a common time limit is enforced. Test fairness requires that ability estimates must not depend on the administration of a specific test form. Such a requirement may be violated if speededness differs between test forms. The impact of not taking speed sensitivity into account on the comparability of test forms regarding speededness and ability estimation was investigated. The lognormal measurement model for response times by van der Linden was compared with its extension by Klein Entink, van der Linden, and Fox, which includes a speed sensitivity parameter. An empirical data example was used to show that the extended model can fit the data better than the model without speed sensitivity parameters. A simulation was conducted, which showed that test forms with different average speed sensitivity yielded substantially different ability estimates for slow test takers, especially for test takers with high ability. Therefore, the use of the extended lognormal model for response times is recommended for the calibration of item pools in high-stakes testing situations. Limitations to the proposed approach and further research questions are discussed. (DIPF/Orig.)
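For readers unfamiliar with the two models compared in this abstract, they are conventionally written as follows. This is a sketch in the standard notation of the response-time literature, not an excerpt from the article:

```latex
% van der Linden's lognormal response-time model: person p works at
% speed \tau_p; item i has time intensity \beta_i and precision
% (time discrimination) \alpha_i.
\[ \ln T_{pi} = \beta_i - \tau_p + \varepsilon_{pi},
   \qquad \varepsilon_{pi} \sim \mathcal{N}\bigl(0, \alpha_i^{-2}\bigr) \]
% The extension by Klein Entink, van der Linden, and Fox adds a speed
% sensitivity parameter \phi_i, letting items differ in how strongly
% their response times react to a test taker's speed:
\[ \ln T_{pi} = \beta_i - \phi_i \tau_p + \varepsilon_{pi} \]
```

With \phi_i = 1 for all items, the extension reduces to the base model; test forms whose items differ in average \phi_i are what the simulation above identifies as problematic for slow test takers.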
DIPF-Departments:
Lehr- und Lernqualität in Bildungseinrichtungen
Adoption of learning technologies in times of pandemic crisis
Author(s):
Drachsler, Hendrik; Jansen, Jeroen; Kirschner, Paul A.
Title:
Adoption of learning technologies in times of pandemic crisis
In:
Journal of Computer Assisted Learning, 37 (2021) 6, pp. 1509-1512
DOI:
10.1111/jcal.12626
URL:
https://onlinelibrary.wiley.com/doi/10.1111/jcal.12626
Publication Type:
3a. Contributions to peer-reviewed journals; bibliographies/reviews and the like (e.g., link recommendations)
Language:
English
Keywords:
Pandemic; Learning; Teaching; Digitalization; Online; Learning environment; Information technology
DIPF-Departments:
Informationszentrum Bildung
Advancements in technology-based assessment. Emerging item formats, test designs, and data sources
Editor(s):
Goldhammer, Frank; Scherer, Ronny; Greiff, Samuel
Title:
Advancements in technology-based assessment. Emerging item formats, test designs, and data sources
Published:
Lausanne: Frontiers Media, 2020 (Frontiers in Psychology. Special issue)
DOI:
10.3389/fpsyg.2019.03047
URL:
https://www.frontiersin.org/research-topics/7841/advancements-in-technology-based-assessment-emerging-item-formats-test-designs-and-data-sources
Publication Type:
2. Editorship; journal special issue
Language:
English
Keywords:
Technology-based testing; Item; Test; Design; Analysis; Automation; Process data processing; Learning; Assessment
Abstract (english):
Technology has become an indispensable tool for educational and psychological assessment in today's world. Researchers and large-scale assessment programs alike are increasingly using digital technology (e.g., laptops, tablets, and smartphones) to collect behavioral data beyond the mere scoring of responses as correct. Along these lines, technology innovates and enhances assessments in terms of item and test design, methods of test delivery, data collection and analysis, as well as the reporting of test results. The aim of this Research Topic is to present recent advancements in technology-based assessment. Our focus is on cognitive assessments, including the measurement of abilities, competencies, knowledge, and skills, but may also include non-cognitive aspects of the assessment. In the area of (cognitive) assessments, the innovations driven by technology are manifold: Digital assessments facilitate the creation of new types of stimuli and response formats that were out of reach for assessments using paper; for instance, interactive simulations including multimedia elements, as well as virtual or augmented realities which serve as the task environment. Moreover, technology allows the automated generation of items based on specific item models. Such items can be assembled into tests in a more flexible way than that offered by paper-and-pencil tests and could even be created on the fly; for instance, tailoring item difficulty to individual ability (adaptive testing), while assuring that multiple content constraints are met. As a requirement for adaptive testing, or to lower the burden on raters coding item responses manually, computers enable the automatic scoring of constructed responses; for instance, text responses can be scored automatically by using natural language processing and text mining. Technology-based assessments provide not only response data (e.g., correct vs. incorrect responses) but also process data (e.g., frequencies and sequences of test-taking strategies, including navigation behavior) which reflects the course of solving a test item. Process data has been used successfully, among other things, to evaluate data quality, to define process-oriented constructs, to improve measurement precision, and to address substantive research questions. We expect the contributions of this Research Topic to build on this research by considering how technology can further improve and enhance educational and psychological assessment. Regarding educational testing, both research papers on the assessment of learning (e.g., summative assessment of learning outcomes) and on assessment for learning (e.g., formative assessment to support the learning process) are welcome. We expect submissions of empirical papers that present and evaluate innovative technology-based assessment approaches, as well as new applications or illustrations of already existing approaches. We are also interested in papers addressing the validity of test scores and other indicators obtained from innovative assessment procedures.
DIPF-Departments:
Bildungsqualität und Evaluation
Convergent evidence for validity of a performance-based ICT skills test
Author(s):
Engelhardt, Lena; Naumann, Johannes; Goldhammer, Frank; Frey, Andreas; Wenzel, S. Franziska C.; Hartig, Katja; Horz, Holger
Title:
Convergent evidence for validity of a performance-based ICT skills test
In:
European Journal of Psychological Assessment, 36 (2020) 2, pp. 269-279
DOI:
10.1027/1015-5759/a000507
URL:
https://econtent.hogrefe.com/doi/10.1027/1015-5759/a000507
Publication Type:
3a. Contributions to peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Information technology education; Information and communication technology; Problem solving; Competence; Skill; Pupil; Lower secondary education; Test; Test item; Validity; Evidence; Germany
Abstract (english):
The goal of this study was to investigate sources of evidence of convergent validity supporting the construct interpretation of scores on a simulation-based ICT skills test. The construct definition conceives of ICT skills as relying on ICT-specific knowledge as well as on comprehension and problem-solving skills. On this basis, a validity argument comprising three claims was formulated and tested. (1) In line with the classical nomothetic span approach, all three predictor variables positively explained task success across all ICT skills items. As ICT tasks can vary in the extent to which they require construct-related knowledge and skills, and in the way related items are designed and implemented, the effects of construct-related predictor variables were expected to vary across items. (2) A task-based analysis approach revealed that the item-level effects of the three predictor variables were in line with the targeted construct interpretation for most items. (3) Finally, item characteristics could significantly explain the random effect of problem-solving skills, but not that of comprehension skills. Taken together, the obtained results generally support the validity of the construct interpretation.
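One common way to formalize the item-level prediction of task success described in this abstract is an explanatory item-response model with item-specific random slopes. The following is a generic sketch under that assumption, not the authors' exact specification:

```latex
% Success of person p on item i, predicted by construct-related
% person covariates z_{pk} (ICT-specific knowledge, comprehension,
% problem solving): b_i is an item intercept, \gamma_k a fixed
% effect, and u_{ik} an item-specific random slope whose variance
% can in turn be modeled by item characteristics.
\[ \mathrm{logit}\, P(X_{pi} = 1)
   = b_i + \sum_{k} \bigl(\gamma_k + u_{ik}\bigr)\, z_{pk} \]
```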
DIPF-Departments:
Bildungsqualität und Evaluation
Evaluation of online information in university students. Development and scaling of the screening instrument EVON
Author(s):
Hahnel, Carolin; Eichmann, Beate; Goldhammer, Frank
Title:
Evaluation of online information in university students. Development and scaling of the screening instrument EVON
In:
Frontiers in Psychology, 11 (2020), Article 562128
DOI:
10.3389/fpsyg.2020.562128
URN:
urn:nbn:de:0111-pedocs-232241
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-232241
Publication Type:
3a. Contributions to peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Germany; Internet; Information literacy; Resource; Credibility; Relevance; Assessment; Test; Test development; Item analysis; Search engine; Simulation; Technology-based testing; Interview; Survey instrument; Evaluation; Student; Rasch model; Empirical study
Abstract:
As Internet sources provide information of varying quality, the ability to evaluate the relevance and credibility of online information is an indispensable prerequisite skill. Based on the assumption that competent individuals can use different properties of information to assess its relevance and credibility, we developed the EVON (evaluation of online information), an interactive computer-based test for university students. The developed instrument consists of eight items that assess the skill to evaluate online information in six languages. Within a simulated search engine environment, students are requested to select the most relevant and credible link for a given task. To evaluate the developed instrument, we conducted two studies: (1) a pre-study for quality assurance and observation of the response process (cognitive interviews of n = 8 students) and (2) a main study aimed at investigating the psychometric properties of the EVON and its relation to other variables (n = 152 students). The results of the pre-study provided initial evidence for a theoretically sound test construction with regard to students' item processing behavior. The results of the main study showed acceptable psychometric outcomes for a standardized screening instrument with a small number of items. The item design criteria affected item difficulty as intended, and students' choice to visit a website had an impact on their task success. Furthermore, the probability of task success was positively predicted by general cognitive performance and reading skill. Although the results uncovered a few weaknesses (e.g., a lack of difficult items), and efforts to validate the interpretation of EVON outcomes must continue, the overall results speak in favor of a successful test construction and provide an initial indication that the EVON assesses students' skill in evaluating online information in search engine environments. (DIPF/Orig.)
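The scaling referred to here (the keyword list names the Rasch model) follows the standard form below, shown only as a reference sketch:

```latex
% Rasch model: probability that student p solves item i, given
% person ability \theta_p and item difficulty b_i.
\[ P(X_{pi} = 1 \mid \theta_p, b_i)
   = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)} \]
```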
DIPF-Departments:
Bildungsqualität und Evaluation
Interpretieren im Kontext virtueller Forschungsumgebungen - zu den Potentialen und Grenzen einer virtuellen Forschungsumgebung und ihres Einsatzes in der akademischen Lehre
Author(s):
Kminek, Helge; Meier, Michael; Schindler, Christoph; Hocker, Julian; Veja, Cornelia
Title:
Interpretieren im Kontext virtueller Forschungsumgebungen - zu den Potentialen und Grenzen einer virtuellen Forschungsumgebung und ihres Einsatzes in der akademischen Lehre
In:
Zeitschrift für qualitative Forschung, 21 (2020) 2, pp. 185-198
DOI:
10.3224/zqf.v21i2.03
URL:
https://www.budrich-journals.de/index.php/zqf/article/view/36570
Publication Type:
3a. Contributions to peer-reviewed journals; contribution to a special issue
Language:
German
Keywords:
Hermeneutics; Objectivity; Method; Research process; Digitalization; Computer-assisted procedure; Data analysis; Interpretation; Tool; Software technology; Educational science; University teaching; Methodology; Casuistry
Abstract (english):
The article presents a virtual research environment developed for the method of Objective Hermeneutics. With regard to research findings on the use of Objective Hermeneutics in academic teaching, the requirements for and potential of such a virtual research environment are discussed. The discussion draws, among other things, on the example of a teaching research seminar whose initial results support the thesis that the virtual research environment presented here can be a useful supplement to university methods teaching, especially for Objective Hermeneutics. (DIPF/Orig.)
DIPF-Departments:
Informationszentrum Bildung
Rapid guessing rates across administration mode and test setting
Author(s):
Kröhne, Ulf; Deribo, Tobias; Goldhammer, Frank
Title:
Rapid guessing rates across administration mode and test setting
In:
Psychological Test and Assessment Modeling, 62 (2020) 2, pp. 144-177
DOI:
10.25656/01:23630
URN:
urn:nbn:de:0111-pedocs-236307
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-236307
Publication Type:
3a. Contributions to peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Test; Assessment; Innovation; Validity; Technology-based testing; Design; Test construction; Test procedure; Effect; Behavior; Log file; Experiment; Student; Comparative study
Abstract (english):
Rapid guessing can threaten measurement invariance and the validity of large-scale assessments, which are often conducted under low-stakes conditions. Comparing measures collected under different administration modes or in different test settings necessitates that rapid guessing rates also be comparable. Response time thresholds can be used to identify rapid guessing behavior. Using data from an experiment embedded in an assessment of university students as part of the National Educational Panel Study (NEPS), we show that rapid guessing rates can differ across modes. Specifically, rapid guessing rates are found to be higher for un-proctored individual online assessment. It is also shown that rapid guessing rates differ across different groups of students and are related to properties of the test design. No relationship between dropout behavior and rapid guessing rates was found. (DIPF/Orig.)
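Rapid guessing is typically identified by flagging responses faster than an item-level response-time threshold. The sketch below illustrates that general idea in Python with a hypothetical 10%-of-item-median normative threshold; it is a generic illustration of the technique, not the procedure used in this study:

```python
import numpy as np

def rapid_guessing_rate(response_times, threshold_frac=0.10):
    """Flag responses faster than a fraction of the item's median
    response time and return the overall rapid-guessing rate.

    response_times: 2D array, persons x items (seconds).
    threshold_frac: normative threshold (assumption: 10% of the
    item median, one common choice in the literature).
    """
    rt = np.asarray(response_times, dtype=float)
    thresholds = threshold_frac * np.nanmedian(rt, axis=0)  # per item
    rapid = rt < thresholds  # thresholds broadcast over persons
    return rapid.mean()

# Hypothetical comparison of two administration modes.
rng = np.random.default_rng(0)
proctored = rng.lognormal(mean=3.5, sigma=0.5, size=(200, 20))
online = np.where(rng.random((200, 20)) < 0.05,       # 5% rapid guesses
                  rng.uniform(0.5, 2.0, (200, 20)),   # very fast responses
                  rng.lognormal(3.5, 0.5, (200, 20)))
print(rapid_guessing_rate(proctored), rapid_guessing_rate(online))
```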
DIPF-Departments:
Bildungsqualität und Evaluation
ICT engagement. A new construct and its assessment in PISA 2015
Author(s):
Kunina-Habenicht, Olga; Goldhammer, Frank
Title:
ICT engagement. A new construct and its assessment in PISA 2015
In:
Large-scale Assessments in Education, 8 (2020), Article 6
DOI:
10.1186/s40536-020-00084-z
URN:
urn:nbn:de:0111-pedocs-232754
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-232754
Publication Type:
3a. Contributions to peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Information and communication technology; Competence; Interest; Use; Autonomy; Social interaction; PISA <Programme for International Student Assessment>; Data analysis; Variable; Relationship; Structural equation model; Switzerland; Germany
Abstract (english):
As a relevant cognitive-motivational aspect of ICT literacy, the new construct ICT Engagement is theoretically grounded in self-determination theory and comprises the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity evidence supporting the construct interpretation of test scores on the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure via dimensional analyses and examined the relation of ICT Engagement aspects to other variables. The analyses are based on public data from the PISA 2015 main study from Switzerland (n = 5860) and Germany (n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, the ICT Engagement scales explained the largest amount of variance in ICT Use for Entertainment, followed by Practical Use. Third, we found significantly lower values for girls on all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between scores on the subscale "ICT as a topic in social interaction" and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales. (DIPF/Orig.)
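The four-dimensional structure confirmed here corresponds to a standard confirmatory factor model; a generic sketch, not the authors' exact specification:

```latex
% Four correlated factors (ICT interest, perceived ICT competence,
% perceived autonomy related to ICT use, ICT as a topic in social
% interaction), measured by the observed item responses x.
\[ x = \Lambda \xi + \delta, \qquad
   \xi = (\xi_1, \xi_2, \xi_3, \xi_4)^{\top}, \qquad
   \mathrm{Cov}(\xi) = \Phi \]
```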
DIPF-Departments:
Bildungsqualität und Evaluation
Reanalysis of the German PISA data. A comparison of different approaches for trend estimation with a particular emphasis on mode effects
Author(s):
Robitzsch, Alexander; Lüdtke, Oliver; Goldhammer, Frank; Kröhne, Ulf; Köller, Olaf
Title:
Reanalysis of the German PISA data. A comparison of different approaches for trend estimation with a particular emphasis on mode effects
In:
Frontiers in Psychology, 11 (2020), Article 884
DOI:
10.3389/fpsyg.2020.00884
URN:
urn:nbn:de:0111-pedocs-232269
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-232269
Publication Type:
3a. Contributions to peer-reviewed journals; article (no special category)
Language:
English
Keywords:
PISA <Programme for International Student Assessment>; Test; Procedure; Scaling; Method; Technology-based testing; Change; Development; Impact research; Germany
Abstract:
International large-scale assessments, such as the Program for International Student Assessment (PISA), are conducted to provide information on the effectiveness of education systems. In PISA, the target population of 15-year-old students is assessed every 3 years. Trends show whether competencies have changed in the countries between PISA cycles. In order to provide valid trend estimates, it is desirable to retain the same test conditions and statistical methods in all PISA cycles. In PISA 2015, however, the test mode changed from paper-based to computer-based tests, and the scaling method was changed. In this paper, we investigate the effects of these changes on trend estimation in PISA using German data from all PISA cycles (2000-2015). Our findings suggest that the change from paper-based to computer-based tests could have a severe impact on trend estimation but that the change of the scaling model did not substantially change the trend estimates.
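The core identification problem the paper addresses can be stated in one line: when the administration mode changes between cycles, the observed trend confounds true change with the mode effect. A stylized decomposition (an illustrative identity, not the paper's estimator):

```latex
% CBA = computer-based, PBA = paper-based assessment. The observed
% 2012-to-2015 trend mixes the true trend with the mode effect:
\[ \widehat{\Delta}
   = \mu_{2015}^{\mathrm{CBA}} - \mu_{2012}^{\mathrm{PBA}}
   = \underbrace{\mu_{2015}^{\mathrm{PBA}} - \mu_{2012}^{\mathrm{PBA}}}_{\text{true trend}}
   + \underbrace{\mu_{2015}^{\mathrm{CBA}} - \mu_{2015}^{\mathrm{PBA}}}_{\text{mode effect}} \]
```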
DIPF-Departments:
Bildungsqualität und Evaluation
ReCo: Textantworten automatisch auswerten. Methodenworkshop
Author(s):
Zehner, Fabian; Andersen, Nico
Title:
ReCo: Textantworten automatisch auswerten. Methodenworkshop
In:
Zeitschrift für Soziologie der Erziehung und Sozialisation, 40 (2020) 3, pp. 334-340
DOI:
10.25656/01:22115
URN:
urn:nbn:de:0111-pedocs-221153
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-221153
Publication Type:
3b. Contributions to other journals; practice-oriented
Language:
German
Keywords:
Software; Technology-based testing; Answer; Text; Test scoring; Automation; Data analysis; Conception; Methodology
Abstract:
This article is the first to publish the prototype of a freely available R- and Java-based software tool that has been evaluated for use with German text responses and is currently being further developed for additional languages: ReCo (Automatic Text Response Coder; Zehner, Sälzer & Goldhammer, 2016). ReCo specializes in short text responses and targets their semantics, which is why it is also referred to as content scoring. The software presented here ships with a demo data set; it is important to note up front that this data set and the example responses cited here exhibit only very limited linguistic variety. This is because the data set is based on empirical data and, owing to their confidentiality, was extensively manipulated by hand, which would not have been possible with linguistically more complex items. The ReCo methodology itself, however, also works for more complex responses [...]. This article briefly outlines the ReCo methodology and introduces, for the first time, the Shiny app that makes automatic coding flexibly applicable to one's own data. To this end, it sketches how the currently available prototype is installed and applied to a demo data set. Finally, the article gives an outlook on the functionality the app will offer after leaving the current prototype phase and over its long-term development. Current developments can be followed on the ReCo website: www.reco.science (DIPF/Orig.)
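To convey the underlying idea of content scoring, the Python sketch below assigns a new short response the code of the most semantically similar already-coded response via TF-IDF cosine similarity. This is explicitly not ReCo's actual algorithm (ReCo is R/Java-based), and the demo responses are invented for illustration:

```python
# Generic illustration of automatic short-answer content scoring:
# assign a new response the human code of the most similar
# already-coded response (TF-IDF + cosine similarity).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

coded_responses = [
    ("the earth orbits the sun", 1),      # hypothetical coded data
    ("the sun circles the earth", 0),
    ("earth revolves around the sun", 1),
]
texts, codes = zip(*coded_responses)

vectorizer = TfidfVectorizer()
reference = vectorizer.fit_transform(texts)

def score(response: str) -> int:
    """Return the code of the most similar coded response."""
    vec = vectorizer.transform([response])
    sims = cosine_similarity(vec, reference)[0]
    return codes[sims.argmax()]

print(score("our planet revolves around the sun"))  # -> 1
```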
DIPF-Departments:
Bildungsqualität und Evaluation