Results of the search in the DIPF publication database
Your query:
(Keywords: "Automatisierung")
54 items found
Authors:
Zesch, Torsten; Horbach, Andrea; Zehner, Fabian
Title:
To score or not to score. Factors influencing performance and feasibility of automatic content scoring of text responses
In:
Educational Measurement: Issues and Practice, 42 (2023) 1, pp. 44-58
DOI:
10.1111/emip.12544
URL:
https://onlinelibrary.wiley.com/doi/10.1111/emip.12544
Document type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Response; Automation; Assessment; Influencing factor; Content; Performance; Text; Tool; Method
Abstract (English):
In this article, we systematize the factors influencing performance and feasibility of automatic content scoring methods for short text responses. We argue that performance (i.e., how well an automatic system agrees with human judgments) mainly depends on the linguistic variance seen in the responses and that this variance is indirectly influenced by other factors such as target population or input modality. Extending previous work, we distinguish conceptual, realization, and nonconformity variance, which are differentially impacted by the various factors. While conceptual variance relates to different concepts embedded in the text responses, realization variance refers to their diverse manifestation through natural language. Nonconformity variance is added by aberrant response behavior. Furthermore, besides its performance, the feasibility of using an automatic scoring system depends on external factors, such as ethical or computational constraints, which influence whether a system with a given performance is accepted by stakeholders. Our work provides (i) a framework for assessment practitioners to decide a priori whether automatic content scoring can be successfully applied in a given setup as well as (ii) new empirical findings and the integration of empirical findings from the literature on factors that influence automatic systems' performance. (DIPF/Orig.)
DIPF department:
Lehr- und Lernqualität in Bildungseinrichtungen
Authors:
Kullmann, Sylvia
Title:
KI-Tutorial an der Hochschule Darmstadt
Published:
2023
URL:
https://dgi-info.de/die-modelle-hinter-chatgpt-ki-tutorial-an-der-hochschule-darmstadt/
Document type:
7. Blog posts, podcasts, and vidcasts; blog post
Language:
German
Keywords:
Automation; Competence; Artificial intelligence; Language model; Language processing; Future
Abstract:
How does AI see the world? This and other questions were the subject of a virtual AI tutorial held at Hochschule Darmstadt on 28 March 2023. In eight units, Prof. Dr. Markus Döhring and his team of prospective data scientists at Hochschule Darmstadt guided participants with and without prior computer science knowledge through the world of AI language models.
DIPF department:
Informationszentrum Bildung
Authors:
Gombert, Sebastian
Title:
Methods and perspectives for the automated analytic assessment of free-text responses in formative scenarios
In:
Jivet, Joana; Di Mitri, Daniele; Schneider, Jan; Papamitsiou, Zacharoula; Fominykh, Mikhail (Eds.): Proceedings of the Doctoral Consortium of the 17th European Conference on Technology Enhanced Learning co-located with the 17th European Conference on Technology Enhanced Learning (EC-TEL 2022), Toulouse, France, September 12, 2022, Aachen: RWTH, 2022 (CEUR Workshop Proceedings, 3292), pp. 61-65
URL:
https://ceur-ws.org/Vol-3292/DCECTEL2022_paper08.pdf
Document type:
4. Contributions to edited volumes; conference proceedings
Language:
English
Keywords:
Response; Essay; Automation; Grading; Assessment; Speech recognition; Test; Text
Abstract:
Assessment is the process of testing learners' skills and knowledge. Free-text response items are well suited for assessing learners' active knowledge and writing skills. However, the automatic assessment of such responses is not trivial and requires the application of natural language processing. Accordingly, the automatic assessment of free-text responses is a widely researched topic in educational natural language processing. Most past work targets holistic scoring, the process of assigning overall scores or grades to responses. This is problematic in formative scenarios because learners there require feedback rather than summative scores. Such feedback ideally targets specific aspects of responses, and automated systems that only predict holistic scores therefore cannot serve as a basis for providing it. What is needed instead are systems that implement analytic scoring approaches. Analytic scoring targets specific aspects of responses and scores them according to corresponding criteria. This requires different systems than those addressed by the broad research on automated holistic scoring. In my PhD work, which this paper outlines, I explore approaches for implementing analytic scoring systems by means of state-of-the-art natural language processing. These systems are intended to provide a basis for feedback generation. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
Authors:
Mavrikis, Manolis; Cukurova, Mutlu; Di Mitri, Daniele; Schneider, Jan; Drachsler, Hendrik
Title:
A short history, emerging challenges and co-operation structures for Artificial Intelligence in education
In:
Bildung und Erziehung, 74 (2021) 3, pp. 249-263
DOI:
10.13109/buer.2021.74.3.249
URL:
https://doi.org/10.13109/buer.2021.74.3.249
Document type:
3a. Articles in peer-reviewed journals; bibliographies/reviews and similar (e.g. link collections)
Language:
English
Keywords:
Artificial intelligence; Digitalization; Education; Ethics; History; Cooperation; Learning process; Data analysis; Feedback; Automation; Digital media; Media use; Data mining; Learning research; Teacher; Robot; Implementation; Trust; Acceptance
Abstract:
For the special issue on artificial intelligence and pedagogy, this article presents a short history of research in the field and summarizes current challenges. It focuses on possible paradigm shifts in the research field and emphasizes the need to consider theory and practice while adhering to ethical principles. Finally, it points to international co-operation structures in this area, which can support the interdisciplinary perspectives and methodological approaches required for research in this field. (DIPF/Orig.)
Abstract (English):
To accompany the special issue in Artificial Intelligence and Education, this article presents a short history of research in the field and summarises emerging challenges. We highlight key paradigm shifts that are becoming possible but also the need to pay attention to theory, implementation and pedagogy while adhering to ethical principles. We conclude by drawing attention to international co-operation structures in the field that can support the interdisciplinary perspectives and methods required to undertake research in the area. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
Editors:
Goldhammer, Frank; Scherer, Ronny; Greiff, Samuel
Title:
Advancements in technology-based assessment. Emerging item formats, test designs, and data sources
Published:
Lausanne: Frontiers Media, 2020 (Frontiers in Psychology. Special issue)
DOI:
10.3389/fpsyg.2019.03047
URL:
https://www.frontiersin.org/research-topics/7841/advancements-in-technology-based-assessment-emerging-item-formats-test-designs-and-data-sources
Document type:
2. Editorship; journal special issue
Language:
English
Keywords:
Technology-based testing; Item; Test; Design; Evaluation; Automation; Process data processing; Learning; Assessment
Abstract (English):
Technology has become an indispensable tool for educational and psychological assessment in today's world. Researchers and large-scale assessment programs alike are increasingly using digital technology (e.g., laptops, tablets, and smartphones) to collect behavioral data beyond the mere idea of responses as correct. Along these lines, technology innovates and enhances assessments in terms of item and test design, methods of test delivery, data collection and analysis, as well as the reporting of test results. The aim of this Research Topic is to present recent advancements in technology-based assessment. Our focus is on cognitive assessments, including the measurement of abilities, competencies, knowledge, and skills but may also include non-cognitive aspects of the assessment. In the area of (cognitive) assessments the innovations driven by technology are manifold: Digital assessments facilitate the creation of new types of stimuli and response formats that were out of reach for assessments using paper; for instance, interactive simulations including multimedia elements, as well as virtual or augmented realities which serve as the task environment. Moreover, technology allows the automated generation of items based on specific item models. Such items can be assembled into tests in a more flexible way than that offered by paper-and-pencil tests and could even be created on the fly; for instance, tailoring item difficulty to individual ability (adaptive testing), while assuring that multiple content constraints are met. As a requirement for adaptive testing or to lower the burden of raters coding item responses manually, computers enable the automatic scoring of constructed responses; for instance, text responses can be scored automatically by using natural language processing and text mining. Technology-based assessments provide not only response data (e.g., correct vs. incorrect responses) but also process data (e.g., frequencies and sequences of test-taking strategies, including navigation behavior) which reflects the course of solving a test item. Process data has been used successfully, among others, to evaluate the data quality, to define process-oriented constructs, to improve measurement precision, and to address substantial research questions. We expect the contributions of this Research Topic to build on this research by considering how technology can further improve, and enhance, educational and psychological assessment. Regarding educational testing, both research papers on the assessment of learning (e.g., summative assessment of learning outcomes) and on the assessment for learning (e.g., formative assessment to support the learning process) are welcome. We expect submissions of empirical papers that present and evaluate innovative technology-based assessment approaches, as well as new applications or illustrations of already existing approaches. We are also interested in papers addressing the validity of test scores and other indicators obtained from innovative assessment procedures.
DIPF department:
Bildungsqualität und Evaluation
Authors:
Zehner, Fabian; Andersen, Nico
Title:
ReCo: Textantworten automatisch auswerten. Methodenworkshop
In:
Zeitschrift für Soziologie der Erziehung und Sozialisation, 40 (2020) 3, pp. 334-340
DOI:
10.25656/01:22115
URN:
urn:nbn:de:0111-pedocs-221153
URL:
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-221153
Document type:
3b. Articles in other journals; practice-oriented
Language:
German
Keywords:
Software; Technology-based testing; Response; Text; Test evaluation; Automation; Data analysis; Conception; Methodology
Abstract:
This article publishes for the first time the prototype of an R- and Java-based, freely available software package that has been evaluated for use with German text responses and is currently being developed further for additional languages: ReCo (Automatic Text Response Coder; Zehner, Sälzer & Goldhammer, 2016). ReCo specializes in short text responses and addresses semantics, which is why this is also referred to as content scoring. The software presented here includes a demo data set; it is important to note in advance that this data set and the example responses cited here show only very limited linguistic variety. This is because the data set is based on empirical data that, owing to their confidentiality, were extensively manipulated by hand, which would not have been possible with linguistically more complex items. The ReCo methodology itself, however, also works with more complex responses [...]. This article briefly outlines the ReCo methodology and presents for the first time the Shiny app, which makes automatic coding flexibly applicable to one's own data. To this end, it sketches how the currently available prototype is installed and applied to a demo data set. Finally, the article gives an outlook on the functionalities the app will offer after leaving the current prototype phase and over its long-term development. Current developments can be followed on the ReCo website: www.reco.science (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Authors:
Di Mitri, Daniele; Schneider, Jan; Specht, Marcus; Drachsler, Hendrik
Title:
Detecting mistakes in CPR training with multimodal data and neural networks
In:
Sensors, 19 (2019) 14, Art. 3099
DOI:
10.3390/s19143099
URL:
https://www.mdpi.com/1424-8220/19/14/3099
Document type:
3a. Articles in peer-reviewed journals; contribution to a special issue
Language:
English
Keywords:
Neuropsychology; Psychomotor skills; Practical learning; Student; Medicine; Learning process; Data analysis; Computer program; Measurement; Error; Feedback; Automation; Tutoring system; Validity; Indicator
Abstract:
This study investigated to what extent multimodal data can be used to detect mistakes during Cardiopulmonary Resuscitation (CPR) training. We complemented the Laerdal QCPR ResusciAnne manikin with the Multimodal Tutor for CPR, a multi-sensor system consisting of a Microsoft Kinect for tracking body position and a Myo armband for collecting electromyogram information. We collected multimodal data from 11 medical students, each of them performing two sessions of two-minute chest compressions (CCs). We gathered in total 5254 CCs that were all labelled according to five performance indicators, corresponding to common CPR training mistakes. Three out of five indicators, CC rate, CC depth and CC release, were assessed automatically by the ResusciAnne manikin. The remaining two, related to arms and body position, were annotated manually by the research team. We trained five neural networks for classifying each of the five indicators. The results of the experiment show that multimodal data can provide accurate mistake detection as compared to the ResusciAnne manikin baseline. We also show that the Multimodal Tutor for CPR can detect additional CPR training mistakes such as the correct use of arms and body weight. Thus far, these mistakes were identified only by human instructors. Finally, to investigate user feedback for future implementations of the Multimodal Tutor for CPR, we administered a questionnaire to collect feedback on aspects of CPR training. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
Authors:
Zehner, Fabian
Title:
Künstliche Intelligenz in der Bildung. Ihr Potenzial und der Mythos des Lehrkraftroboters
In:
Schulmanagement, 50 (2019) 2, pp. 8-12
URN:
urn:nbn:de:0111-pedocs-175625
URL:
http://nbn-resolving.org/urn:nbn:de:0111-pedocs-175625
Document type:
3b. Articles in other journals; practice-oriented
Language:
German
Keywords:
Data analysis; Application example; Artificial intelligence; Education; Concept; Definition; Computer; Data; Robot; Computer-assisted learning; Learning environment; Adaptation; Computer program; Media use; Influencing factor; Learning process; Distance learning; Cooperative learning; Performance assessment; Automation
Abstract:
What can artificial intelligence really do? And how can we put it to profitable use in the education sector? Should we fear that our grandchildren's class teacher might be called eduBot in a few decades? Using various application examples, this article examines the potential that actually lies behind artificial intelligence. (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Authors:
Habernal, Ivan; Gurevych, Iryna
Title:
Argumentation mining in user-generated web discourse
In:
Computational Linguistics Journal, 43 (2017) 1, pp. 125-179
DOI:
10.1162/COLI_a_00276
URL:
http://www.mitpressjournals.org/doi/abs/10.1162/COLI_a_00276#.WIDIonpp-nU
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Argumentation; Automation; Computational linguistics; Data mining; Discourse; Educational science; Information retrieval; Model; Reliability; Social software; Text analysis; World Wide Web 2.0
Abstract:
The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people's argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges given by the variety of registers, multiple domains, and unrestricted noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source codes, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
Authors:
Eger, Steffen; Daxenberger, Johannes; Gurevych, Iryna
Title:
Neural end-to-end learning for computational argumentation mining
In:
Association for Computational Linguistics (Ed.): The 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017): Proceedings of the conference, vol. 1 (long papers), July 30 - August 4, 2017, Vancouver, Canada, Stroudsburg, PA: Association for Computational Linguistics, 2017, pp. 11-22
DOI:
10.18653/v1/P17-1002
URL:
https://aclanthology.info/pdf/P/P17/P17-1002.pdf
Document type:
4. Contributions to edited volumes; conference proceedings
Language:
English
Keywords:
Argumentation; Automation; Computational linguistics; Data mining; Classification; Rhetoric; Semantics; Text analysis
Abstract:
We investigate neural techniques for end-to-end computational argumentation mining (AM). We frame AM both as a token-based dependency parsing and as a token-based sequence tagging problem, including a multi-task learning setup. Contrary to models that operate on the argument component level, we find that framing AM as dependency parsing leads to subpar performance results. In contrast, less complex (local) tagging models based on BiLSTMs perform robustly across classification scenarios, being able to catch long-range dependencies inherent to the AM problem. Moreover, we find that jointly learning 'natural' subtasks, in a multi-task learning setup, improves performance. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung