Search results from the DIPF publication database
Your query:
(Keywords: "Technologie")
133 results found
Convergent evidence for validity of a performance-based ICT skills test
Journal article | In: European Journal of Psychological Assessment | 2020
Authors:
Engelhardt, Lena; Naumann, Johannes; Goldhammer, Frank; Frey, Andreas; Wenzel, S. Franziska C.; Hartig, Katja; Horz, Holger
Title:
Convergent evidence for validity of a performance-based ICT skills test
In:
European Journal of Psychological Assessment, 36 (2020) 2, pp. 269-279
DOI:
10.1027/1015-5759/a000507
URN:
urn:nbn:de:0111-pedocs-218426
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-218426
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
ICT education; information and communication technology; problem solving; competence; skill; students; lower secondary education; test; test item; validity; evidence; Germany
Abstract (English):
The goal of this study was to investigate sources of evidence of convergent validity supporting the construct interpretation of scores on a simulation-based ICT skills test. The construct definition understands ICT skills as reliant on ICT-specific knowledge as well as comprehension and problem-solving skills. On the basis of this, a validity argument comprising three claims was formulated and tested. (1) In line with the classical nomothetic span approach, all three predictor variables explained task success positively across all ICT skills items. As ICT tasks can vary in the extent to which they require construct-related knowledge and skills and in the way related items are designed and implemented, the effects of construct-related predictor variables were expected to vary across items. (2) A task-based analysis approach revealed that the item-level effects of the three predictor variables were in line with the targeted construct interpretation for most items. (3) Finally, item characteristics could significantly explain the random effect of problem-solving skills, but not comprehension skills. Taken together, the obtained results generally support the validity of the construct interpretation.
DIPF department:
Bildungsqualität und Evaluation
Evaluation of online information in university students. Development and scaling of the screening instrument EVON
Journal article | In: Frontiers in Psychology | 2020
Authors:
Hahnel, Carolin; Eichmann, Beate; Goldhammer, Frank
Title:
Evaluation of online information in university students. Development and scaling of the screening instrument EVON
In:
Frontiers in Psychology, (2020), 11:562128
DOI:
10.3389/fpsyg.2020.562128
URN:
urn:nbn:de:0111-pedocs-232241
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-232241
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Germany; Internet; information literacy; resource; credibility; relevance; assessment; test; test development; item analysis; search engine; simulation; technology-based testing; interview; survey instrument; evaluation; university students; Rasch model; empirical study
Abstract:
As Internet sources provide information of varying quality, it is an indispensable prerequisite skill to evaluate the relevance and credibility of online information. Based on the assumption that competent individuals can use different properties of information to assess its relevance and credibility, we developed the EVON (evaluation of online information), an interactive computer-based test for university students. The developed instrument consists of eight items that assess the skill to evaluate online information in six languages. Within a simulated search engine environment, students are requested to select the most relevant and credible link for a respective task. To evaluate the developed instrument, we conducted two studies: (1) a pre-study for quality assurance and observing the response process (cognitive interviews of n = 8 students) and (2) a main study aimed at investigating the psychometric properties of the EVON and its relation to other variables (n = 152 students). The results of the pre-study provided first evidence for a theoretically sound test construction with regard to students' item processing behavior. The results of the main study showed acceptable psychometric outcomes for a standardized screening instrument with a small number of items. The item design criteria affected the item difficulty as intended, and students' choice to visit a website had an impact on their task success. Furthermore, the probability of task success was positively predicted by general cognitive performance and reading skill. Although the results uncovered a few weaknesses (e.g., a lack of difficult items), and the efforts of validating the interpretation of EVON outcomes still need to be continued, the overall results speak in favor of a successful test construction and provide first indication that the EVON assesses students' skill in evaluating online information in search engine environments. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Interpretieren im Kontext virtueller Forschungsumgebungen - zu den Potentialen und Grenzen einer virtuellen Forschungsumgebung und ihres Einsatzes in der akademischen Lehre
Journal article | In: Zeitschrift für qualitative Forschung | 2020
Authors:
Kminek, Helge; Meier, Michael; Schindler, Christoph; Hocker, Julian; Veja, Cornelia
Title:
Interpretieren im Kontext virtueller Forschungsumgebungen - zu den Potentialen und Grenzen einer virtuellen Forschungsumgebung und ihres Einsatzes in der akademischen Lehre
In:
Zeitschrift für qualitative Forschung, 21 (2020) 2, pp. 185-198
DOI:
10.3224/zqf.v21i2.03
URL:
https://www.budrich-journals.de/index.php/zqf/article/view/36570
Document type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
German
Keywords:
hermeneutics; objectivity; method; research process; digitalization; computer-assisted methods; data analysis; interpretation; tool; software technology; educational science; higher education teaching; methodology; casuistry
Abstract (English):
The article presents the virtual research environment developed for the method of Objective Hermeneutics. With regard to research findings on the use of Objective Hermeneutics in academic teaching, the requirements and potential uses of a virtual research environment are discussed. The discussion is based, among other things, on the example of a teaching research seminar, whose initial results support the thesis that the virtual research environment presented here can be a useful supplement to university methods teaching, especially for Objective Hermeneutics. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
Rapid guessing rates across administration mode and test setting
Journal article | In: Psychological Test and Assessment Modeling | 2020
Authors:
Kröhne, Ulf; Deribo, Tobias; Goldhammer, Frank
Title:
Rapid guessing rates across administration mode and test setting
In:
Psychological Test and Assessment Modeling, 62 (2020) 2, pp. 144-177
DOI:
10.25656/01:23630
URN:
urn:nbn:de:0111-pedocs-236307
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-236307
Document type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
test; assessment; innovation; validity; technology-based testing; design; test construction; test procedure; effect; behavior; log file; experiment; university students; comparative study
Abstract (English):
Rapid guessing can threaten measurement invariance and the validity of large-scale assessments, which are often conducted under low-stakes conditions. Comparing measures collected under different administration modes or in different test settings necessitates that rapid guessing rates also be comparable. Response time thresholds can be used to identify rapid guessing behavior. Using data from an experiment embedded in an assessment of university students as part of the National Educational Panel Study (NEPS), we show that rapid guessing rates can differ across modes. Specifically, rapid guessing rates are found to be higher for un-proctored individual online assessment. It is also shown that rapid guessing rates differ across different groups of students and are related to properties of the test design. No relationship between dropout behavior and rapid guessing rates was found. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
ICT engagement. A new construct and its assessment in PISA 2015
Journal article | In: Large-scale Assessments in Education | 2020
Authors:
Kunina-Habenicht, Olga; Goldhammer, Frank
Title:
ICT engagement. A new construct and its assessment in PISA 2015
In:
Large-scale Assessments in Education, (2020), 8:6
DOI:
10.1186/s40536-020-00084-z
URN:
urn:nbn:de:0111-pedocs-232754
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-232754
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
information and communication technology; competence; interest; use; autonomy; social interaction; PISA (Programme for International Student Assessment); data analysis; variable; relationship; structural equation model; Switzerland; Germany
Abstract (English):
As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland (n = 5860) and Germany (n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale "ICT as a topic in social interaction" and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Ambulatory assessment for physical activity research. State of the science, best practices and future directions
Journal article | In: Psychology of Sport and Exercise | 2020
Authors:
Reichert, Markus; Giurgiu, Marco; Koch, Elena; Wieland, Lena M.; Lautenbach, Sven; Neubauer, Andreas B.; Haaren-Mack, Birte von; Schilling, René; Timm, Irina; Notthoff, Nanna; Marzi, Isabel; Hill, Holger; Brüßler, Sarah; Eckert, Tobias; Fiedler, Janis; Burchartz, Alexander; Anedda, Bastian; Wunsch, Kathrin; Gerber, Markus; Jekauc, Darko; Woll, Alexander; Dunton, Genevieve F.; Kanning, Martina; Nigg, Claudio R.; Ebner-Priemer, Ulrich; Liao, Yue
Title:
Ambulatory assessment for physical activity research. State of the science, best practices and future directions
In:
Psychology of Sport and Exercise, 50 (2020), 101742
DOI:
10.1016/j.psychsport.2020.101742
URN:
urn:nbn:de:0111-dipfdocs-228896
URL:
https://www.pedocs.de/volltexte/2022/22889/pdf/Neubauer_2020_Ambulatory_Assessment_for_Physical_Activity_Research_A.pdf
Document type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
computer-assisted methods; self-assessment; body (biology); activity; real-life relevance; everyday reality; everyday situations; technology; digitalization; data collection technique; diary; method; tracking; best-practice model; future orientation; state of research
Abstract:
Technological and digital progress benefits physical activity (PA) research. Here we compiled expert knowledge on how Ambulatory Assessment (AA) is utilized to advance PA research, i.e., we present results of the 2nd International CAPA Workshop 2019 "Physical Activity Assessment - State of the Science, Best Practices, Future Directions", in which invited researchers with experience in PA assessment, evaluation, technology and application participated. First, we provide readers with the state of the AA science; we then give best-practice recommendations on how to measure PA via AA, shed light on methodological frontiers, and discuss future directions. AA encompasses a class of methods that allows the study of PA and its behavioral, biological and physiological correlates as they unfold in everyday life. AA includes monitoring of movement (e.g., via accelerometry), physiological function (e.g., via mobile electrocardiogram), contextual information (e.g., via geolocation tracking), and ecological momentary assessment (EMA; e.g., electronic diaries) to capture self-reported information. A key strength of AA is data assessment in near real-time and in real-world settings, which minimizes retrospective biases and consequently enables ecologically valid findings. Importantly, AA enables multiple assessments across time within subjects, resulting in intensive longitudinal data (ILD), which allow unraveling within-person determinants of PA in everyday life. In this paper, we show how AA methods such as triggered e-diaries and geolocation tracking can be used to measure PA and its correlates, and how these findings may translate into real-life interventions. In sum, AA provides numerous possibilities for PA research, especially the opportunity to tackle within-subject antecedents, concomitants, and consequences of PA as they unfold in everyday life. In-depth insights into the determinants of PA could help us design and deliver impactful interventions in real-world contexts, thus enabling us to address critical health issues of the 21st century such as insufficient PA and high levels of sedentary behavior. (DIPF/Orig.)
DIPF department:
Bildung und Entwicklung
Reanalysis of the German PISA data. A comparison of different approaches for trend estimation with a particular emphasis on mode effects
Journal article | In: Frontiers in Psychology | 2020
Authors:
Robitzsch, Alexander; Lüdtke, Oliver; Goldhammer, Frank; Kröhne, Ulf; Köller, Olaf
Title:
Reanalysis of the German PISA data. A comparison of different approaches for trend estimation with a particular emphasis on mode effects
In:
Frontiers in Psychology, (2020), 11:884
DOI:
10.3389/fpsyg.2020.00884
URN:
urn:nbn:de:0111-pedocs-232269
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-232269
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
PISA (Programme for International Student Assessment); test; procedure; scaling; method; technology-based testing; change; development; impact research; Germany
Abstract:
International large-scale assessments, such as the Program for International Student Assessment (PISA), are conducted to provide information on the effectiveness of education systems. In PISA, the target population of 15-year-old students is assessed every 3 years. Trends show whether competencies have changed in the countries between PISA cycles. In order to provide valid trend estimates, it is desirable to retain the same test conditions and statistical methods in all PISA cycles. In PISA 2015, however, the test mode changed from paper-based to computer-based tests, and the scaling method was changed. In this paper, we investigate the effects of these changes on trend estimation in PISA using German data from all PISA cycles (2000-2015). Our findings suggest that the change from paper-based to computer-based tests could have a severe impact on trend estimation but that the change of the scaling model did not substantially change the trend estimates.
DIPF department:
Bildungsqualität und Evaluation
ReCo: Textantworten automatisch auswerten. Methodenworkshop
Journal article | In: Zeitschrift für Soziologie der Erziehung und Sozialisation | 2020
Authors:
Zehner, Fabian; Andersen, Nico
Title:
ReCo: Textantworten automatisch auswerten. Methodenworkshop
In:
Zeitschrift für Soziologie der Erziehung und Sozialisation, 40 (2020) 3, pp. 334-340
DOI:
10.25656/01:22115
URN:
urn:nbn:de:0111-pedocs-221153
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-221153
Document type:
3b. Articles in other journals; practice-oriented
Language:
German
Keywords:
software; technology-based testing; response; text; test scoring; automation; data analysis; conception; methodology
Abstract:
This article is the first to publish the prototype of a freely available R- and Java-based software package that has been evaluated for use with German text responses and is currently being further developed for additional languages: ReCo (Automatic Text Response Coder; Zehner, Sälzer & Goldhammer, 2016). ReCo specializes in short text responses and targets semantics, which is why it is also referred to as content scoring. The software presented here includes a demo data set; it is important to note up front that this data set and the example responses cited here exhibit only very limited linguistic variety. This is because the data set is based on empirical data and, owing to their confidentiality, was extensively edited by hand, which would not have been possible with linguistically more complex items. The ReCo methodology itself, however, also works with more complex responses [...]. This article briefly outlines the ReCo methodology and introduces, for the first time, the Shiny app that makes automatic coding flexibly applicable to one's own data. To this end, it sketches how the currently available prototype is installed and applied to a demo data set. Finally, the article gives an outlook on the functionality the app will offer once it leaves the current prototype phase, as well as over the long-term development. Current developments can be followed on the ReCo website: www.reco.science (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Evaluating educational standards using assessment "with" and "through" technology
Book chapter | In: Donevska-Todorova, Ana; Faggiano, Eleonora; Trgalova, Jana; Lavicza, Zsolt; Weinhandl, Robert; Clark-Wilson, Alison; Weigand, Hans-Georg (Eds.): Proceedings of the Tenth ERME Topic Conference (ETC 10) on Mathematics Education in the Digital Age (MEDA), 16-18 September 2020 in Linz, Austria | Paris: Centre pour la communication scientifique directe | 2020
Authors:
Frenken, Lena; Libbrecht, Paul; Greefrath, Gilbert; Schiffner, Daniel; Schnitzler, Carola
Title:
Evaluating educational standards using assessment "with" and "through" technology
In:
Donevska-Todorova, Ana; Faggiano, Eleonora; Trgalova, Jana; Lavicza, Zsolt; Weinhandl, Robert; Clark-Wilson, Alison; Weigand, Hans-Georg (Eds.): Proceedings of the Tenth ERME Topic Conference (ETC 10) on Mathematics Education in the Digital Age (MEDA), 16-18 September 2020 in Linz, Austria, Paris: Centre pour la communication scientifique directe, 2020, pp. 361-368
URL:
https://hal.archives-ouvertes.fr/hal-02932218/document#page=374
Document type:
4. Contributions to edited volumes; conference proceedings
Language:
English
Keywords:
students; performance assessment; comparative tests; educational standards; mathematics; technology-based testing; implementation; Germany
Abstract:
This paper reports on a feasibility study of creating a standardised assessment instrument to evaluate students' competencies as defined in the German national educational standards. The study aimed at combining tools widespread in mathematics classes, such as dynamic geometry and spreadsheets, in an integrated, computer-driven way. We report on the mathematical and technical feasibility: What limits were reached, and which opportunities appeared? The report provides indications that such a development process is feasible, but that attention to the task description is required, as students may be unaware of the manipulations needed to perform the tasks. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
Analysing log file data from PIAAC
Book chapter | In: Maehler, Débora B.; Rammstedt, Beatrice (Eds.): Large-scale cognitive assessment: Analyzing PIAAC data | Cham: Springer | 2020
Authors:
Goldhammer, Frank; Hahnel, Carolin; Kroehne, Ulf
Title:
Analysing log file data from PIAAC
In:
Maehler, Débora B.; Rammstedt, Beatrice (Eds.): Large-scale cognitive assessment: Analyzing PIAAC data, Cham: Springer, 2020 (Methodology of Educational Measurement and Assessment), pp. 239-269
DOI:
10.1007/978-3-030-47515-4_10
URL:
https://link.springer.com/chapter/10.1007/978-3-030-47515-4_10
Document type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
English
Keywords:
PIAAC (Programme for the International Assessment of Adult Competencies); technology-based testing; computer-assisted methods; log file; data analysis; software; tools; use; research; access; documentation
Abstract:
The OECD Programme for the International Assessment of Adult Competencies (PIAAC) was the first computer-based large-scale assessment to provide anonymised log file data from the cognitive assessment together with extensive online documentation and a data analysis support tool. The goal of the chapter is to familiarise researchers with how to access, understand, and analyse PIAAC log file data for their research purposes. After providing some conceptual background on the multiple uses of log file data and how to infer states of information processing from log file data, previous research using PIAAC log file data is reviewed. Then, the accessibility, structure, and documentation of the PIAAC log file data are described in detail, as well as how to use the PIAAC LogDataAnalyzer to extract predefined process indicators and how to create new process indicators based on the raw log data export.
DIPF department:
Bildungsqualität und Evaluation