Search results from the DIPF publication database
Your query:
(Keywords: "Algorithmus")
30 items found
Visual analysis of point cloud neighborhoods via multi-scale geometric measures
Ritter, Marcel; Schiffner, Daniel; Harders, Matthias
Journal article
| In: Visual Informatics | 2021
Authors:
Ritter, Marcel; Schiffner, Daniel; Harders, Matthias
Title:
Visual analysis of point cloud neighborhoods via multi-scale geometric measures
In:
Visual Informatics, 5 (2021) 3, pp. 1-14
DOI:
10.1016/j.visinf.2021.05.001
URL:
https://www.sciencedirect.com/science/article/pii/S2468502X21000206?via%3Dihub
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Geometry; Algorithm; Visualization; Data; Data processing; FRAMEWORK; System; Software; Software development; Tool
Abstract (english):
Point-based geometry representations have become widely used in numerous contexts, ranging from particle-based simulations, over stereo image matching, to depth sensing via light detection and ranging. Our application focus is on the reconstruction of curved line structures in noisy 3D point cloud data. Respective algorithms operating on such point clouds often rely on the notion of a local neighborhood. Regarding the latter, our approach employs multi-scale neighborhoods, for which weighted covariance measures of local points are determined. Curved line structures are reconstructed via vector field tracing, using a bidirectional piecewise streamline integration. We also introduce an automatic selection of optimal starting points via multi-scale geometric measures. The pipeline development and choice of parameters was driven by an extensive, automated initial analysis process on over a million prototype test cases. The behavior of our approach is controlled by several parameters - the majority being set automatically, leaving only three to be controlled by a user. In an extensive, automated final evaluation, we cover over one hundred thousand parameter sets, including 3D test geometries with varying curvature, sharp corners, intersections, data holes, and systematically applied varying types of noise. Further, we analyzed different choices for the point of reference in the co-variance computation; using a weighted mean performed best in most cases. In addition, we compared our method to current, publicly available line reconstruction frameworks. Up to thirty times faster execution times were achieved in some cases, at comparable error measures. Finally, we also demonstrate an exemplary application on four real-world 3D light detection and ranging datasets, extracting power line cables.
DIPF department:
Informationszentrum Bildung
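The multi-scale covariance measures described in the abstract can be illustrated with a small sketch. This is not the authors' pipeline: the Gaussian weighting, the scale parameter, and the eigenvalue-ratio shape measures below are common choices assumed for illustration only.

```python
import numpy as np

def weighted_covariance(points, center, scale):
    # Gaussian distance weights relative to a reference point; the paper
    # compares several reference choices, a weighted mean working best
    d2 = np.sum((points - center) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * scale ** 2))
    mu = (w[:, None] * points).sum(axis=0) / w.sum()
    diffs = points - mu
    return (w[:, None] * diffs).T @ diffs / w.sum()

def geometric_measures(points, center, scale):
    # Sorted eigenvalues l1 >= l2 >= l3 give standard shape measures
    l = np.sort(np.linalg.eigvalsh(weighted_covariance(points, center, scale)))[::-1]
    l1, l2, l3 = np.maximum(l, 1e-12)
    return {"linearity": (l1 - l2) / l1,
            "planarity": (l2 - l3) / l1,
            "sphericity": l3 / l1}

# Points sampled along a straight line score high linearity at this scale
t = np.linspace(0.0, 1.0, 50)
line = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
m = geometric_measures(line, line[25], scale=0.5)
```

Varying `scale` yields the multi-scale view: small scales respond to local noise, large scales to global structure.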
CERC2020, Collaborative European Research Conference, Belfast, UK, 10 - 11 September 2020, […]
Afli, Haithem; Bleimann, Udo; Burkhard, Dirk; Loew, Robert; Regier, Stefanie; Stengel, Ingo; […] (Eds.)
Edited volume
| Aachen: RWTH | 2020
Editors:
Afli, Haithem; Bleimann, Udo; Burkhard, Dirk; Loew, Robert; Regier, Stefanie; Stengel, Ingo; Wang, Haiying; Zheng, Huiru Jane
Title:
CERC2020, Collaborative European Research Conference, Belfast, UK, 10 - 11 September 2020, https://www.cerc-conference.eu, proceedings
Publication info:
Aachen: RWTH, 2020 (CEUR workshop proceedings, 2815)
URN:
urn:nbn:de:0074-2815-0
URL:
http://ceur-ws.org/Vol-2815
Document type:
2. Editorship; edited volume (no special category)
Language:
English
Keywords:
Europe; Research; Cooperation; Interdisciplinarity; Data protection; Law; Framework directives; Robotics; Software agent; Internet; Network; Software; New technologies; Navigation; Mapping; Health; Communication; Software development; Data processing; Artificial intelligence; Algorithm; Statistics; Data analysis; Visualization; COVID-19; Economy; Society
Abstract (english):
In today's world, which has recently seen fractures and isolation forming among states, international and interdisciplinary collaboration is an increasingly important source of progress. Collaboration is a rich source of innovation and growth. It is the goal of the Collaborative European Research Conference (CERC2020) to foster collaboration among friends and colleagues across disciplines and nations within Europe. CERC emerged from long-standing cooperation between the Cork Institute of Technology, Ireland and Hochschule Darmstadt - University of Applied Sciences, Germany. CERC has grown to include more well-established partners in Germany, the United Kingdom, Greece, Spain, Italy, and many more. CERC is truly interdisciplinary, bringing together new and experienced researchers from science, engineering, business, humanities, and the arts. At CERC, researchers not only present the findings published in their research papers; they are also challenged to collaboratively work out joint aspects of their research during conference sessions and informal social events and gatherings. Organizing such an event involves the hard work of many people. The COVID-19 pandemic has impacted our daily life and research; it brought significant change to CERC2020, and this is the first time the conference was held virtually online. The conference received submissions from around the world, not just from European countries. Thanks go to the international program committee and my fellow program chairs, particularly to Prof Udo Bleimann for invaluable support throughout the conference, and to Prof Ingo Stengel, Dr. Haiying Wang, Dr. Ali Haithem, and Dr. Stefanie Regier for supporting me in the review process. Dirk Burkhardt and Dr. Robert Loew put a great effort into setting up the website and conference management system and preparing the conference programme and proceedings.
Thanks also go to my colleagues from Ulster University, Hochschule Karlsruhe, Hochschule Darmstadt, and the Cork Institute of Technology, Ireland for providing invaluable support to the conference. CERC2020 received support from Ulster University, Visit Belfast, and Belfast City Council. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
Das Konstrukt der computer- und informationsbezogenen Kompetenzen und das Konstrukt der Kompetenzen […]
Senkbeil, Martin; Eickelmann, Birgit; Vahrenhold, Jan; Goldhammer, Frank; Gerick, Julia; […]
Book chapter
| In: Eickelmann, Birgit; Bos, Wilfried; Gerick, Julia; Goldhammer, Frank; Schaumburg, Heike; Schwippert, Knut; Senkbeil, Martin; Vahrenhold, Jan (Eds.): ICILS 2018 #Deutschland - Computer- und informationsbezogene Kompetenzen von Schülerinnen und Schülern im zweiten internationalen Vergleich und Kompetenzen im Bereich Computational Thinking | Münster: Waxmann | 2019
Authors:
Senkbeil, Martin; Eickelmann, Birgit; Vahrenhold, Jan; Goldhammer, Frank; Gerick, Julia; Labusch, Amelie
Title:
Das Konstrukt der computer- und informationsbezogenen Kompetenzen und das Konstrukt der Kompetenzen im Bereich 'Computational Thinking' in ICILS 2018
In:
Eickelmann, Birgit; Bos, Wilfried; Gerick, Julia; Goldhammer, Frank; Schaumburg, Heike; Schwippert, Knut; Senkbeil, Martin; Vahrenhold, Jan (Eds.): ICILS 2018 #Deutschland - Computer- und informationsbezogene Kompetenzen von Schülerinnen und Schülern im zweiten internationalen Vergleich und Kompetenzen im Bereich Computational Thinking, Münster: Waxmann, 2019, pp. 79-111
URN:
urn:nbn:de:0111-pedocs-183215
URL:
http://nbn-resolving.org/urn:nbn:de:0111-pedocs-183215
Document type:
4. Contributions to edited volumes; edited volume (no special category)
Language:
German
Keywords:
Requirements; Information society; Knowledge society; Construction; Computer literacy; Computer use; Information literacy; Information processing; Information exchange; Communicative competence; Competence; Stage model; Example; Task; Computer science; Thinking; Data processing; Problem solving; Modeling; Algorithm; Artificial intelligence; Development; Conception; Student achievement; School achievement; Study; Empirical research; Educational research
Abstract:
[…] This chapter presents and explains the framework and test design [of the ICILS constructs of computer and information literacy and of competences in the area of 'Computational Thinking'] along the lines of the study's international design. Each construct is first situated with respect to its relevance for successful participation in society, including the fulfilment of professional and personal goals (Sections 2.1 and 3.1, respectively), and then specified in terms of content on the basis of the study's international research design in Sections 2.2 and 3.2. Building on this, the competence-level model of computer and information literacy tested within ICILS 2018 is explained (Section 2.3). Both theoretical constructs, one further developed and one newly developed in the course of the study, form the central basis within ICILS 2018 for developing the student tests in the respective areas of computer and information literacy (Section 2.4) and of competences in the area of 'Computational Thinking' (Section 3.3). (DIPF/Orig.)
DIPF department:
Bildungsqualität und Evaluation
Adaptive item selection under matroid constraints
Bengs, Daniel; Brefeld, Ulf; Kröhne, Ulf
Journal article
| In: Journal of Computerized Adaptive Testing | 2018
Authors:
Bengs, Daniel; Brefeld, Ulf; Kröhne, Ulf
Title:
Adaptive item selection under matroid constraints
In:
Journal of Computerized Adaptive Testing, 6 (2018) 2, pp. 15-36
DOI:
10.7333/1808-0602015
URN:
urn:nbn:de:0111-dipfdocs-166953
URL:
http://www.dipfdocs.de/volltexte/2020/16695/pdf/JCAT_2018_2_Bengs_Brefeld_Kroehne_Adaptive_item_selection_under_matroid_constraints_A.pdf
Document type:
3a. Articles in peer-reviewed journals; article (no special category)
Language:
English
Keywords:
Adaptive testing; Algorithm; Computer-assisted methods; Item bank; Measurement procedure; Technology-based testing; Test construction
Abstract (english):
The shadow testing approach (STA; van der Linden & Reese, 1998) is considered the state of the art in constrained item selection for computerized adaptive tests. The present paper shows that certain types of constraints (e.g., bounds on categorical item attributes) induce a matroid on the item bank. This observation is used to devise item selection algorithms that are based on matroid optimization and lead to optimal tests, as the STA does. In particular, a single matroid constraint can be treated optimally by an efficient greedy algorithm that selects the most informative item preserving the integrity of the constraints. A simulation study shows that for applicable constraints, the optimal algorithms realize a decrease in standard error (SE) corresponding to a reduction in test length of up to 10% compared to the maximum priority index (Cheng & Chang, 2009) and up to 30% compared to Kingsbury and Zara's (1991) constrained computerized adaptive testing.
DIPF department:
Bildungsqualität und Evaluation
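The greedy selection under a single matroid constraint, as described in the abstract, can be sketched as follows. The 1PL information function, the per-category quotas, and all item values below are illustrative assumptions, not the paper's item bank or exact algorithm.

```python
from collections import Counter
import math

def fisher_information(theta, b):
    # 1PL (Rasch) item information at ability theta for difficulty b: p(1-p)
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def greedy_select(items, theta, quotas, test_length):
    # items: (item_id, difficulty, category); quotas: per-category caps,
    # i.e. a partition matroid over the item bank
    used = Counter()
    chosen = []
    # Visit items by decreasing information, keeping only those that
    # preserve the integrity of the constraints
    for item_id, b, cat in sorted(items, key=lambda it: -fisher_information(theta, it[1])):
        if len(chosen) == test_length:
            break
        if used[cat] < quotas.get(cat, 0):
            chosen.append(item_id)
            used[cat] += 1
    return chosen

items = [
    ("i1", 0.0, "algebra"), ("i2", 0.1, "algebra"), ("i3", 0.2, "algebra"),
    ("i4", 1.5, "geometry"), ("i5", -1.5, "geometry"),
]
picked = greedy_select(items, theta=0.0, quotas={"algebra": 2, "geometry": 1}, test_length=3)
```

For a single partition-matroid constraint, greedy selection of the most informative admissible item is optimal, which is the observation the paper exploits.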
Frame-based data factorizations
Mair, Sebastian; Boubekki, Ahcène; Brefeld, Ulf
Conference paper
| In: Precup, Doina; Teh, Yee Whye (Eds.): Proceedings of the International Conference on Machine Learning (ICML 2017), 6-11 August 2017, International Convention Centre, Sydney, Australia | Red Hook; NY: Curran | 2017
Authors:
Mair, Sebastian; Boubekki, Ahcène; Brefeld, Ulf
Title:
Frame-based data factorizations
In:
Precup, Doina; Teh, Yee Whye (Eds.): Proceedings of the International Conference on Machine Learning (ICML 2017), 6-11 August 2017, International Convention Centre, Sydney, Australia, Red Hook; NY: Curran, 2017 (Proceedings of Machine Learning Research, 70), pp. 2305-2313
URL:
http://proceedings.mlr.press/v70/mair17a/mair17a.pdf
Document type:
4. Contributions to edited volumes; conference proceedings/paper
Language:
English
Keywords:
Algorithm; Automation; Computational linguistics; Data; Data analysis; Method; Procedure
Abstract:
Archetypal Analysis is the method of choice to compute interpretable matrix factorizations. Every data point is represented as a convex combination of factors, i.e., points on the boundary of the convex hull of the data. This renders computation inefficient. In this paper, we show that the set of vertices of a convex hull, the so-called frame, can be efficiently computed by a quadratic program. We provide theoretical and empirical results for our proposed approach and make use of the frame to accelerate Archetypal Analysis. The novel method yields similar reconstruction errors as baseline competitors but is much faster to compute. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
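The paper computes the frame via a quadratic program; as a simpler stand-in built on the same definition (a frame point is one that is not a convex combination of the other points), each point can be tested with an LP feasibility check. This sketch assumes SciPy is available and is not the authors' method.

```python
import numpy as np
from scipy.optimize import linprog

def is_interior(X, i):
    # x_i is NOT a frame point iff some convex combination of the
    # remaining points reproduces it: find w >= 0, sum(w) = 1, X'^T w = x_i
    others = np.delete(X, i, axis=0)
    A_eq = np.vstack([others.T, np.ones(len(others))])
    b_eq = np.append(X[i], 1.0)
    res = linprog(np.zeros(len(others)), A_eq=A_eq, b_eq=b_eq)  # w >= 0 by default
    return res.success  # feasible => interior (or non-vertex boundary) point

def frame(X):
    return [i for i in range(len(X)) if not is_interior(X, i)]

# Square corners plus the midpoint: only the corners belong to the frame
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
```

Restricting Archetypal Analysis to the frame shrinks the factorization problem while leaving the convex hull, and hence the reachable factors, unchanged.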
Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using […]
Habernal, Ivan; Gurevych, Iryna
Conference paper
| In: Association for Computational Linguistics (Eds.): Proceedings of the 54th annual meeting of the Association for Computational Linguistics (ACL 2016): Long papers | Stroudsburg; PA: Association for Computational Linguistics | 2016
Authors:
Habernal, Ivan; Gurevych, Iryna
Title:
Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM
In:
Association for Computational Linguistics (Eds.): Proceedings of the 54th annual meeting of the Association for Computational Linguistics (ACL 2016): Long papers, Stroudsburg; PA: Association for Computational Linguistics, 2016, pp. 1589-1599
URL:
http://www.aclweb.org/anthology/P16-1150
Document type:
4. Contributions to edited volumes; conference proceedings/paper
Language:
English
Keywords:
Algorithm; Argumentation; Automation; Computational linguistics; Communication; Online; Prediction; Quality; Rhetoric; Social software; Text analysis; Persuasion; World Wide Web 2.0
Abstract (english):
We propose a new task in the field of computational argumentation in which we investigate qualitative properties of Web arguments, namely their convincingness. We cast the problem as relation classification, where a pair of arguments having the same stance to the same prompt is judged. We annotate a large dataset of 16k pairs of arguments over 32 topics and investigate whether the relation "A is more convincing than B" exhibits properties of total ordering; these findings are used as global constraints for cleaning the crowdsourced data. We propose two tasks: (1) predicting which argument from an argument pair is more convincing and (2) ranking all arguments to the topic based on their convincingness. We experiment with feature-rich SVM and bidirectional LSTM and obtain 0.76-0.78 accuracy and 0.35-0.40 Spearman's correlation in a cross-topic evaluation. We release the newly created corpus UKPConvArg1 and the experimental software under open licenses. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
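Turning pairwise "A is more convincing than B" judgments into a per-topic ranking can be done in many ways; the sketch below uses a trivial win-rate baseline over hypothetical judgments, not the paper's SVM or BiLSTM models, purely to illustrate the ranking task.

```python
from collections import defaultdict

def rank_by_wins(pairs):
    # pairs: (winner, loser) judgments "winner is more convincing than loser"
    wins = defaultdict(int)
    seen = defaultdict(int)
    for winner, loser in pairs:
        wins[winner] += 1
        seen[winner] += 1
        seen[loser] += 1
    # Rank arguments by the fraction of their comparisons won
    return sorted(seen, key=lambda arg: -wins[arg] / seen[arg])

# Hypothetical judgments over four arguments on one topic
pairs = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "d"), ("b", "d"), ("c", "d")]
ranking = rank_by_wins(pairs)
```

If the judgments really do exhibit the total-ordering property the paper checks, such a win-rate ranking is consistent with them; inconsistent cycles would surface as ties or order violations.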
Modeling extractive sentence intersection via subtree entailment
Levy, Omer; Dagan, Ido; Stanovsky, Gabriel; Eckle-Kohler, Judith; Gurevych, Iryna
Conference paper
| In: The COLING 2016 Organizing Committee (Eds.): Proceedings of the 26th International Conference on Computational Linguistics (COLING) | Osaka: The COLING 2016 Organizing Committee | 2016
Authors:
Levy, Omer; Dagan, Ido; Stanovsky, Gabriel; Eckle-Kohler, Judith; Gurevych, Iryna
Title:
Modeling extractive sentence intersection via subtree entailment
In:
The COLING 2016 Organizing Committee (Eds.): Proceedings of the 26th International Conference on Computational Linguistics (COLING), Osaka: The COLING 2016 Organizing Committee, 2016, pp. 2891-2901
URL:
http://www.aclweb.org/anthology/C/C16/C16-1272.pdf
Document type:
4. Contributions to edited volumes; conference proceedings/paper
Language:
English
Keywords:
Algorithm; Computational linguistics; Data; Classification; Semantics; Structure; Syntax; Text
Abstract (english):
Sentence intersection captures the semantic overlap of two texts, generalizing over paradigms such as textual entailment and semantic text similarity. Despite its modeling power, it has received little attention because it is difficult for non-experts to annotate. We analyze 200 pairs of similar sentences and identify several underlying properties of sentence intersection. We leverage these insights to design an algorithm that decomposes the sentence intersection task into several simpler annotation tasks, facilitating the construction of a high quality dataset via crowdsourcing. We implement this approach and provide an annotated dataset of 1,764 sentence intersections. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
Medical concept embeddings via labeled background corpora
Mencía, Eneldo Loza; De Melo, Gerard; Nam, Jinseok
Conference paper
| In: European Language Resources Association (Eds.): Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016) | Portoroz: European Language Resources Association | 2016
Authors:
Mencía, Eneldo Loza; De Melo, Gerard; Nam, Jinseok
Title:
Medical concept embeddings via labeled background corpora
In:
European Language Resources Association (Eds.): Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016), Portoroz: European Language Resources Association, 2016, pp. 3629-3636
URL:
http://www.lrec-conf.org/proceedings/lrec2016/pdf/1190_Paper.pdf
Document type:
4. Contributions to edited volumes; conference proceedings/paper
Language:
English
Keywords:
Algorithm; Automation; Computational linguistics; Medicine; Semantics; Language; Text analysis
Abstract:
In recent years, we have seen an increasing amount of interest in low-dimensional vector representations of words. Among other things, these facilitate computing word similarity and relatedness scores. The most well-known example of algorithms to produce representations of this sort are the word2vec approaches. In this paper, we investigate a new model to induce such vector spaces for medical concepts, based on a joint objective that exploits not only word co-occurrences but also manually labeled documents, as available from sources such as PubMed. Our extensive experimental analysis shows that our embeddings lead to significantly higher correlations with human similarity and relatedness assessments than previous work. Due to the simplicity and versatility of vector representations, these findings suggest that our resource can easily be used as a drop-in replacement to improve any systems relying on medical concept similarity measures. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
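Once such vector representations exist, similarity scoring reduces to cosine similarity between concept vectors. The embeddings below are made-up toy values for illustration, not trained medical concept vectors from the paper's resource.

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product over the product of the norms
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-d "concept embeddings" (hypothetical values, for illustration only);
# related drugs should point in similar directions
emb = {
    "aspirin": [0.9, 0.1, 0.2],
    "ibuprofen": [0.85, 0.15, 0.25],
    "fracture": [0.1, 0.9, 0.3],
}
sim_drugs = cosine(emb["aspirin"], emb["ibuprofen"])
sim_mixed = cosine(emb["aspirin"], emb["fracture"])
```

This is the sense in which the authors describe their resource as a drop-in replacement: any system that consumes concept-similarity scores can simply swap the underlying vectors.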
Domain-specific corpus expansion with focused webcrawling
Remus, Steffen; Biemann, Chris
Conference paper
| In: European Language Resources Association (Eds.): Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016) | Portoroz: European Language Resources Association | 2016
Authors:
Remus, Steffen; Biemann, Chris
Title:
Domain-specific corpus expansion with focused webcrawling
In:
European Language Resources Association (Eds.): Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016), Portoroz: European Language Resources Association, 2016, pp. 3607-3611
URL:
http://www.lrec-conf.org/proceedings/lrec2016/pdf/316_Paper.pdf
Document type:
4. Contributions to edited volumes; conference proceedings/paper
Language:
English
Keywords:
Algorithm; Automation; Education; Computational linguistics; Data mining; Hypertext; Model; Language; Text; Text analysis
Abstract:
This work presents a straightforward method for extending or creating in-domain web corpora by focused webcrawling. The focused webcrawler uses statistical N-gram language models to estimate the relatedness of documents and weblinks and needs as input only N-grams or plain texts of a predefined domain and seed URLs as starting points. Two experiments demonstrate that our focused crawler is able to stay focused in domain and language. The first experiment shows that the crawler stays in a focused domain, the second experiment demonstrates that language models trained on focused crawls obtain better perplexity scores on in-domain corpora. We distribute the focused crawler as open source software. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
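The crawler's core decision, scoring a fetched document against an in-domain language model, can be sketched with a heavily simplified add-one-smoothed unigram model. The paper uses N-gram models; all texts below are toy data, and the scoring is an assumption for illustration.

```python
import math
from collections import Counter

def train_unigram(texts):
    # Unigram counts over whitespace-tokenized, lowercased in-domain text
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    vocab = len(counts)
    return counts, total, vocab

def avg_logprob(text, model):
    # Add-one smoothed per-token log-probability; higher means more in-domain
    counts, total, vocab = model
    words = text.lower().split()
    return sum(math.log((counts[w] + 1) / (total + vocab)) for w in words) / len(words)

in_domain = ["machine learning corpus", "language model training corpus",
             "domain corpus expansion"]
model = train_unigram(in_domain)
on_topic = avg_logprob("language model corpus", model)
off_topic = avg_logprob("recipe for chocolate cake", model)
```

A focused crawler would follow a link only when the linking document (or anchor context) scores above some relatedness threshold, which is how the crawl stays in domain.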
Multi-view learning with dependent views
Brefeld, Ulf
Conference paper
| In: ACM (Eds.): Proceedings of the ACM/SIGAPP Symposium on Applied Computing | New York: Association for Computing Machinery | 2015
Authors:
Brefeld, Ulf
Title:
Multi-view learning with dependent views
In:
ACM (Eds.): Proceedings of the ACM/SIGAPP Symposium on Applied Computing, New York: Association for Computing Machinery, 2015, pp. 1-6
URL:
https://www.kma.informatik.tu-darmstadt.de/fileadmin/user_upload/Group_KMA/kma_publications/sac2015.pdf
Document type:
4. Contributions to edited volumes; conference proceedings/paper
Language:
English
Keywords:
Algorithm; Computer program; Data; Classification; Learning; Text
Abstract:
Multi-view algorithms, such as co-training and co-EM, utilize unlabeled data when the available attributes can be split into independent and compatible subsets. Experiments have shown that multi-view learning is sometimes beneficial for problems for which the independence assumption is not satisfied. In practice, unfortunately, it is not possible to measure the dependency between two attribute sets; hence, there is no criterion that allows one to decide whether multi-view learning is applicable. We conduct experiments with various text classification problems and investigate the effectiveness of the co-trained SVM and the co-EM SVM under various conditions, including violations of the independence assumption. We identify the error correlation coefficient of the initial classifiers as an elaborate indicator of the expected benefit of multi-view learning. (DIPF/Orig.)
DIPF department:
Informationszentrum Bildung
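The error correlation coefficient named as the indicator above can be computed as the Pearson correlation of the two initial classifiers' 0/1 error indicator vectors. The error vectors below are made-up, not the paper's experimental data.

```python
def error_correlation(errors_a, errors_b):
    # Pearson correlation of two classifiers' 0/1 error indicator vectors
    n = len(errors_a)
    ma, mb = sum(errors_a) / n, sum(errors_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(errors_a, errors_b)) / n
    va = sum((a - ma) ** 2 for a in errors_a) / n
    vb = sum((b - mb) ** 2 for b in errors_b) / n
    return cov / (va * vb) ** 0.5

# Complementary error patterns (the two views rarely err on the same
# example) give a low, here negative, coefficient - the favorable case
# for multi-view learning
a = [1, 0, 0, 1, 0, 0, 1, 0]
b = [0, 1, 0, 0, 1, 0, 0, 1]
rho = error_correlation(a, b)
```

A coefficient near 1 means both view classifiers err on the same examples, so neither can correct the other on unlabeled data; a low or negative coefficient signals the expected benefit.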