What makes a convincing argument?
Empirical analysis and detecting attributes of convincingness in Web argumentation
In: Association for Computational Linguistics (ed.): Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Stroudsburg, PA: Association for Computational Linguistics
Publication type: contribution in an edited volume; conference proceedings paper
This article tackles a new and challenging task in computational argumentation. Given a pair of arguments on a controversial topic, we aim to directly assess qualitative properties of the arguments in order to explain why one argument is more convincing than the other. We approach this task in a fully empirical manner by annotating 26k explanations written in natural language. These explanations describe the convincingness of the arguments in a given pair, such as their strengths or flaws. We create a new crowd-sourced corpus containing 9,111 argument pairs, multi-labeled with 17 classes, which was cleaned and curated using several strict quality measures. We propose two tasks on this data set: (1) predicting the full label distribution and (2) classifying the types of flaws in less convincing arguments. Our experiments with feature-rich SVM learners and bidirectional LSTM neural networks with convolution and attention mechanisms reveal that such fine-grained analysis of Web argument convincingness is a very challenging task. We release the new UKPConvArg2 corpus and software to the research community under permissive licenses.
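To make task (1) concrete, the sketch below shows one plausible way to turn per-annotator multi-label annotations for an argument pair into a normalized distribution over the 17 reason classes, and to score a predicted distribution against it with KL divergence. Function names and the evaluation metric are illustrative assumptions, not taken from the released UKPConvArg2 software.

```python
import math

# Illustrative helpers; the paper's released code may differ.

def label_distribution(annotations, num_classes=17):
    """Turn crowd multi-label annotations for one argument pair
    (one list of class indices per annotator) into a normalized
    distribution over the reason classes."""
    counts = [0] * num_classes
    for labels in annotations:
        for c in labels:
            counts[c] += 1
    total = sum(counts)
    return [c / total for c in counts]

def kl_divergence(gold, pred, eps=1e-12):
    """KL(gold || pred); eps guards against log(0) when the
    predicted distribution assigns zero mass to a gold class."""
    return sum(g * math.log((g + eps) / (p + eps))
               for g, p in zip(gold, pred) if g > 0)
```

For example, three annotators labeling a pair with classes {0,1}, {1}, and {1,2} yield the distribution [0.2, 0.6, 0.2] over a three-class subset, and a perfect prediction scores a KL divergence of zero.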