Search results from the DIPF publication database
Your query:
(Keywords: "Computerunterstütztes Verfahren")
109 items found
Daily variability in working memory is coupled with negative affect. The role of attention and motivation
Brose, Annette; Schmiedek, Florian; Lövdén, Martin; Lindenberger, Ulman
Journal article
| In: Emotion | 2012
Record ID: 32712
Authors:
Brose, Annette; Schmiedek, Florian; Lövdén, Martin; Lindenberger, Ulman
Title:
Daily variability in working memory is coupled with negative affect. The role of attention and motivation
In:
Emotion, 12 (2012) 3, pp. 605-617
DOI:
10.1037/a0024436
URL:
https://doi.apa.org/record/2011-15460-001
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Affect; Working memory; Task; Attention; Computer-assisted method; Germany; Individual; Young adult; Longitudinal study; Performance capability; Motivation; Test; Difference
Abstract (English):
Across days, individuals experience varying levels of negative affect, control of attention, and motivation. We investigated whether this intraindividual variability was coupled with daily fluctuations in working memory (WM) performance. On 100 days, 101 younger individuals worked on a spatial N-back task and rated negative affect, control of attention, and motivation. Results showed that individuals differed in how reliably WM performance fluctuated across days, and that subjective experiences were primarily linked to performance accuracy. WM performance was lower on days with higher levels of negative affect, reduced control of attention, and reduced task-related motivation. Thus, variables that were found to predict WM in between-subjects designs showed important relationships to WM at the within-person level. In addition, there was shared predictive variance among predictors of WM. Days with increased negative affect and reduced performance were also days with reduced control of attention and reduced motivation to work on tasks. These findings are in line with proposed mechanisms linking negative affect and cognitive performance.
DIPF department:
Bildung und Entwicklung
Prozessbezogene Diagnostik von Lesefähigkeiten bei Grundschulkindern
Richter, Tobias; Isberner, Maj-Britt; Naumann, Johannes; Kutzner, Yvonne
Journal article
| In: Zeitschrift für Pädagogische Psychologie | 2012
Record ID: 33045
Authors:
Richter, Tobias; Isberner, Maj-Britt; Naumann, Johannes; Kutzner, Yvonne
Title:
Prozessbezogene Diagnostik von Lesefähigkeiten bei Grundschulkindern
In:
Zeitschrift für Pädagogische Psychologie, 26 (2012) 4, pp. 313-331
DOI:
10.1024/1010-0652/a000079
URL:
https://econtent.hogrefe.com/doi/10.1024/1010-0652/a000079
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
German
Keywords:
Computer-assisted method; Diagnostics; Diagnostic test; Empirical study; Frankfurt a.M.; Primary school; Primary school child; Cognitive psychology; Cognitive processes; Cologne; Reading skill; Reading comprehension; Measurement procedure; Psychometrics; Cross-sectional study; School year 01; School year 02; School year 03; School year 04; Difference; Validity
Abstract:
From the perspective of cognitive psychology, reading skills rest on the efficient mastery of component processes of reading comprehension at the word, sentence, and text level. In this article we present ProDi-L, a novel computer-based diagnostic instrument intended to enable a differentiated, process-oriented assessment of reading comprehension in primary school children by jointly recording response accuracy and reaction time as indicators of the reliability and efficiency of individual component processes. Six subtests are designed to capture related but psychometrically clearly separable component skills of reading comprehension. Consistent with this assumption, a cross-sectional study of 536 children in grades 1-4 provided evidence for the factorial validity of ProDi-L. The correlations of ProDi-L test scores with criterion measures of reading skill (assessed with ELFE 1-6), teacher ratings, and non-verbal intelligence measures (discriminant validity) further support the construct and criterion validity of the instrument.
Abstract (English):
From a cognitive perspective, reading skills depend on efficient component processes of reading comprehension on the word, sentence and text level. In this article, we present the novel computer-based instrument ProDi-L, which uses both accuracy and reaction time as indicators of reliability and efficiency of each component process, thereby allowing for a differentiated and process-oriented assessment of reading comprehension in primary school children. Six subtests were developed to assess related but psychometrically clearly distinguishable component processes. In line with this assumption, a cross-sectional study with 536 children of grades 1-4 confirmed the factorial validity of ProDi-L. Correlations of ProDi-L scores with external measures of reading comprehension (assessed with ELFE 1-6), teacher ratings, and non-verbal intelligence scores also confirmed construct, convergent, and discriminant validity.
DIPF department:
Bildungsqualität und Evaluation
Cross-genre and cross-domain detection of semantic uncertainty
Szarvas, György; Vincze, Veronika; Farkas, Richárd; Móra, György; Gurevych, Iryna
Journal article
| In: Computational Linguistics Journal | 2012
Record ID: 32810
Authors:
Szarvas, György; Vincze, Veronika; Farkas, Richárd; Móra, György; Gurevych, Iryna
Title:
Cross-genre and cross-domain detection of semantic uncertainty
In:
Computational Linguistics Journal, 38 (2012) 2, pp. 335-367
URL:
http://www.mitpressjournals.org/doi/pdf/10.1162/COLI_a_00098
Document type:
3a. Articles in peer-reviewed journals; contribution in a special issue
Language:
English
Keywords:
Computational linguistics; Computer-assisted method; Information; Information retrieval; Classification; Model; Natural language system; Semantics; Language analysis; Text analysis; Academic discipline
Abstract (English):
Uncertainty is an important linguistic phenomenon that is relevant in various Natural Language Processing applications, in diverse genres from medical to community generated, newswire or scientific discourse and domains from science to humanities. The semantic uncertainty of a proposition can be identified in most cases by using a finite dictionary - i.e. lexical cues - and the key steps of uncertainty detection in an application include the steps of locating the (genre- and domain-specific) lexical cues, disambiguating them, and linking them with the units of interest for the particular application (e.g. identified events in information extraction). In this study, we focus on the genre and domain differences of the context-dependent semantic uncertainty cue recognition task. We introduce a unified subcategorization of semantic uncertainty as different domain applications can apply different uncertainty categories. Based on this categorization, we normalized the annotation of three corpora and present results with a state-of-the-art uncertainty cue recognition model for four fine-grained categories of semantic uncertainty. Our results reveal the domain and genre dependence of the problem; nevertheless, we also show that even a distant source domain dataset can contribute to the recognition and disambiguation of uncertainty cues, efficiently reducing the annotation costs needed to cover a new domain. Thus, the unified subcategorization and domain adaptation for training the models offer an efficient solution for cross-domain and cross-genre semantic uncertainty recognition.
DIPF department:
Informationszentrum Bildung
Text reuse detection using a composition of text similarity measures
Bär, Daniel; Zesch, Torsten; Gurevych, Iryna
Contribution to an edited volume
| From: Kay, Martin; Boitet, Christian (Eds.): Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012) | Mumbai: The COLING 2012 Organizing Committee | 2012
Record ID: 33289
Authors:
Bär, Daniel; Zesch, Torsten; Gurevych, Iryna
Title:
Text reuse detection using a composition of text similarity measures
From:
Kay, Martin; Boitet, Christian (Eds.): Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), Mumbai: The COLING 2012 Organizing Committee, 2012, pp. 167-184
URL:
http://www.aclweb.org/anthology/C/C12/C12-1011.pdf
Document type:
4. Contributions in edited volumes; conference proceedings/paper
Language:
English
Keywords:
Computer-assisted method; Recognition; Content; Measurement; Plagiarism; Structure; Text; Text analysis; Comparison
Abstract:
Detecting text reuse is a fundamental requirement for a variety of tasks and applications, ranging from journalistic text reuse to plagiarism detection. Text reuse is traditionally detected by computing similarity between a source text and a possibly reused text. However, existing text similarity measures exhibit a major limitation: They compute similarity only on features which can be derived from the content of the given texts, thereby inherently implying that any other text characteristics are negligible. In this paper, we overcome this traditional limitation and compute similarity along three characteristic dimensions inherent to texts: content, structure, and style. We explore and discuss possible combinations of measures along these dimensions, and our results demonstrate that the composition consistently outperforms previous approaches on three standard evaluation datasets, and that text reuse detection greatly benefits from incorporating a diverse feature set that reflects a wide variety of text characteristics.
DIPF department:
Informationszentrum Bildung
Android-based mobile assessment system
Dalir, Mahtab; Rölke, Heiko; Buchal, Björn
Contribution to an edited volume
| From: Biswas, Gautam; Wong, Lung-Hsiang; Hirashima, Tsukasa; Chen, Wenli (Eds.): Proceedings of the 20th International Conference on Computers in Education ICCE 2012 | Singapore: National Institute of Education, Nanyang Technological University, Singapore | 2012
Record ID: 33179
Authors:
Dalir, Mahtab; Rölke, Heiko; Buchal, Björn
Title:
Android-based mobile assessment system
From:
Biswas, Gautam; Wong, Lung-Hsiang; Hirashima, Tsukasa; Chen, Wenli (Eds.): Proceedings of the 20th International Conference on Computers in Education ICCE 2012, Singapore: National Institute of Education, Nanyang Technological University, Singapore, 2012, pp. 370-377
URL:
http://www.lsl.nie.edu.sg/icce2012/wp-content/uploads/2012/11/MAIN-Conference-E-BOOK.pdf
Document type:
4. Contributions in edited volumes; conference proceedings/paper
Language:
English
Keywords:
Computer-assisted method; Case study; Hardware; Performance measurement; Mobility; Mobile phone; School; Student achievement; Software; Technology-based testing
Abstract (English):
Mobile devices such as smartphones and tablet PCs have gained global popularity and are increasingly used in all areas of daily life, including learning and assessment activities. To support different kinds of learning and assessment content, a versatile and comprehensive system is desirable. In addition, other aspects of long-term assessments have to be covered such as security, data protection and adaptivity, e.g. to new schedules or items. In this paper, we present the concept and realization of an Android-based mobile assessment system especially designed for school usage. It has been used already in several research studies in elementary schools, e.g. to measure daily fluctuations of cognitive performance capabilities.
DIPF department:
Informationszentrum Bildung
UBY-LMF - A uniform model for standardizing heterogeneous lexical-semantic resources in ISO-LMF
Eckle-Kohler, Judith; Gurevych, Iryna; Hartmann, Silvana; Matuschek, Michael; Meyer, Christian M.
Contribution to an edited volume
| From: Calzolari, Nicoletta (Ed.): Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC) | Istanbul: European Language Resources Association | 2012
Record ID: 32693
Authors:
Eckle-Kohler, Judith; Gurevych, Iryna; Hartmann, Silvana; Matuschek, Michael; Meyer, Christian M.
Title:
UBY-LMF - A uniform model for standardizing heterogeneous lexical-semantic resources in ISO-LMF
From:
Calzolari, Nicoletta (Ed.): Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC), Istanbul: European Language Resources Association, 2012, pp. 275-282
URL:
http://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2012/LREC2012_ubyLMFcamera-Ready.pdf
Document type:
4. Contributions in edited volumes; conference proceedings/paper
Language:
English
Keywords:
Computational linguistics; Computer-assisted method; German; English; Information; Lexicon; Multilingualism; Model; Ontology; Semantic Web; Software technology; Social software; Language analysis; Standard
Abstract (English):
We present UBY-LMF, an LMF-based model for large-scale, heterogeneous multilingual lexical-semantic resources (LSRs). UBY-LMF allows the standardization of LSRs down to a fine-grained level of lexical information by employing a large number of Data Categories from ISOCat. We evaluate UBY-LMF by converting nine LSRs in two languages to the corresponding format: the English WordNet, Wiktionary, Wikipedia, OmegaWiki, FrameNet and VerbNet and the German Wikipedia, Wiktionary and GermaNet. The resulting LSR, UBY (Gurevych et al., 2012), holds interoperable versions of all nine resources which can be queried by an easy to use public Java API. UBY-LMF covers a wide range of information types from expert-constructed and collaboratively constructed resources for English and German, also including links between different resources at the word sense level. It is designed to accommodate further resources and languages as well as automatically mined lexical-semantic knowledge.
DIPF department:
Informationszentrum Bildung
FlawFinder: A modular system for predicting quality flaws in Wikipedia. Notebook for PAN at CLEF 2012
Ferschke, Oliver; Gurevych, Iryna; Rittberger, Marc
Contribution to an edited volume
| From: Forner, Pamela; Karlgren, Jussi; Womser-Hacker, Christa (Eds.): CLEF 2012 Labs and Workshop, Notebook Papers | Mattarello: Grafiche Futura s.r.l. | 2012
Record ID: 33097
Authors:
Ferschke, Oliver; Gurevych, Iryna; Rittberger, Marc
Title:
FlawFinder: A modular system for predicting quality flaws in Wikipedia. Notebook for PAN at CLEF 2012
From:
Forner, Pamela; Karlgren, Jussi; Womser-Hacker, Christa (Eds.): CLEF 2012 Labs and Workshop, Notebook Papers, Mattarello: Grafiche Futura s.r.l., 2012, p. 101
URL:
http://www.uni-weimar.de/medien/webis/research/events/pan-12/pan12-papers-final/pan12-wikipedia-quality/ferschke12-notebook.pdf
Document type:
4. Contributions in edited volumes; conference proceedings/paper
Language:
English
Keywords:
Computer program; Computer-assisted method; Evaluation; Error; Information; Classification; Measurement; Reference work; Online; Quality; Quality assurance
Abstract (English):
With over 23 million articles in 285 languages, Wikipedia is the largest free knowledge base on the web. Due to its open nature, everybody is allowed to access and edit the contents of this huge encyclopedia. As a downside of this open access policy, quality assessment of the content becomes a critical issue and is hardly manageable without computational assistance. In this paper, we present FlawFinder, a modular system for automatically predicting quality flaws in unseen Wikipedia articles. It competed in the inaugural edition of the Quality Flaw Prediction Task at the PAN Challenge 2012 and achieved the best precision of all systems and the second place in terms of recall and F1-score.
DIPF department:
Informationszentrum Bildung
Uby - a large-scale unified lexical-semantic resource based on LMF
Gurevych, Iryna; Eckle-Kohler, Judith; Hartmann, Silvana; Matuschek, Michael; Meyer, Christian M.; Wirth, Christian
Contribution to an edited volume
| From: Association for Computational Linguistics (Ed.): Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2012) | Avignon: Association for Computational Linguistics | 2012
Record ID: 32696
Authors:
Gurevych, Iryna; Eckle-Kohler, Judith; Hartmann, Silvana; Matuschek, Michael; Meyer, Christian M.; Wirth, Christian
Title:
Uby - a large-scale unified lexical-semantic resource based on LMF
From:
Association for Computational Linguistics (Ed.): Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2012), Avignon: Association for Computational Linguistics, 2012, pp. 580-590
URL:
http://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2012/uby_eacl2012_cameraready.pdf
Document type:
4. Contributions in edited volumes; conference proceedings/paper
Language:
English
Keywords:
Computational linguistics; Computer-assisted method; German; English; Information; Lexicon; Multilingualism; Model; Ontology; Semantic Web; Software technology; Social software; Language analysis; Standard
Abstract (English):
We present UBY, a large-scale lexical semantic resource combining a wide range of information from expert-constructed and collaboratively constructed resources for English and German. It currently contains nine resources in two languages: English WordNet, Wiktionary, Wikipedia, FrameNet and VerbNet, German Wikipedia, Wiktionary and GermaNet, and multilingual OmegaWiki modeled according to the LMF standard. For FrameNet, VerbNet and all collaboratively constructed resources, this is done for the first time. Our LMF model captures lexical information at a fine-grained level by employing a large number of Data Categories from ISOCat and is designed to be directly extensible by new languages and resources. All resources in UBY can be accessed with an easy to use publicly available API.
DIPF department:
Informationszentrum Bildung
Discriminative clustering for market segmentation
Haider, Peter; Chiarandini, Luca; Brefeld, Ulf
Contribution to an edited volume
| From: Association of Computational Linguistics (ACL) (Ed.): Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2012 | New York: Association for Computing Machinery | 2012
Record ID: 33576
Authors:
Haider, Peter; Chiarandini, Luca; Brefeld, Ulf
Title:
Discriminative clustering for market segmentation
From:
Association of Computational Linguistics (ACL) (Ed.): Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2012, New York: Association for Computing Machinery, 2012, pp. 417-425
DOI:
10.1145/2339530.2339600
URL:
http://dl.acm.org/citation.cfm?id=2339530.2339600&coll=DL&dl=GUIDE&CFID=343017233&CFTOKEN=79756621
Document type:
4. Contributions in edited volumes; conference proceedings/paper
Language:
English
Keywords:
Algorithm; Computer-assisted method; Data analysis; Evaluation; Interaction; Internet; Log file; Market economy; User behavior; Prognosis; Search engine
Abstract:
We study discriminative clustering for market segmentation tasks. The underlying problem setting resembles discriminative clustering, however, existing approaches focus on the prediction of univariate cluster labels. By contrast, market segments encode complex (future) behavior of the individuals which cannot be represented by a single variable. In this paper, we generalize discriminative clustering to structured and complex output variables that can be represented as graphical models. We devise two novel methods to jointly learn the classifier and the clustering using alternating optimization and collapsed inference, respectively. The two approaches jointly learn a discriminative segmentation of the input space and a generative output prediction model for each segment. We evaluate our methods on segmenting user navigation sequences from Yahoo! News. The proposed collapsed algorithm is observed to outperform baseline approaches such as mixture of experts. We showcase exemplary projections of the resulting segments to display the interpretability of the solutions.
DIPF department:
Informationszentrum Bildung
Mining multiword terms from Wikipedia
Hartmann, Silvana; Szarvas, György; Gurevych, Iryna
Contribution to an edited volume
| From: Pazienza, Maria Teresa; Stellato, Armando (Eds.): Semi-automatic ontology development: Processes and resources | Hershey, PA: IGI Global | 2012
Record ID: 33101
Authors:
Hartmann, Silvana; Szarvas, György; Gurevych, Iryna
Title:
Mining multiword terms from Wikipedia
From:
Pazienza, Maria Teresa; Stellato, Armando (Eds.): Semi-automatic ontology development: Processes and resources, Hershey, PA: IGI Global, 2012, pp. 226-258
URL:
http://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2012/hartmann_chap_pazienza_book.pdf
Document type:
4. Contributions in edited volumes; edited volume (no special category)
Language:
English
Keywords:
Computer-assisted method; Technical language; Internet; Reference work; Ontology; Statistical method; Terminology
Abstract:
The collection of the specialized vocabulary of a particular domain (terminology) is an important initial step of creating formalized domain knowledge representations (ontologies). Terminology Extraction (TE) aims at automating this process by collecting the relevant domain vocabulary from existing lexical resources or collections of domain texts. In this chapter, the authors address the extraction of multiword terminology, as multiword terms are very frequent in terminology but typically poorly represented in standard lexical resources. They present their method for mining multiword terminology from Wikipedia and the freely available terminology resource that they extracted using the presented method. Terminology extraction based on Wikipedia exploits the advantages of a huge multilingual, domain-transcending knowledge source and large scale structural information that can identify potential multiword units without the need for linguistic processing tools. Thus, while evaluated in English, the proposed method is basically applicable to all languages in Wikipedia.
DIPF department:
Informationszentrum Bildung