Dataset Annotation in Abusive Language Detection
Paula Fortuna, Juan Soler-Company & Leo Wanner
DOI 10.48541/dcr.v12.26 (SSOAR)
Abstract: The last decade saw the rise of research in the area of hate speech and abusive language detection. A great deal of research has been conducted, with new datasets introduced and new models put forward. However, contrastive studies of the annotation of different datasets have also revealed that problematic issues remain. Ambiguous and conflicting definitions across studies make it more difficult to evaluate model reproducibility and generalizability and require additional steps for dataset standardization. To overcome these challenges, the field needs a common understanding of concepts and problems so that standard datasets and compatible approaches can be developed, avoiding inefficient and redundant research. This article attempts to identify persistent challenges and to develop guidelines for future annotation tasks. Among the challenges and guidelines identified and discussed in the article are concept subjectivity, the focus on overt hate speech, dataset integrity, and the lack of ethical considerations.
Fortuna, P., Soler-Company, J., & Wanner, L. (2023). Dataset annotation in abusive language detection. In C. Strippel, S. Paasch-Colberg, M. Emmer, & J. Trebbe (Eds.), Challenges and perspectives of hate speech research (pp. 443–464). Digital Communication Research. https://doi.org/10.48541/dcr.v12.26
This book is published open access and licensed under Creative Commons Attribution 4.0 (CC-BY 4.0).
The long-term archiving of this book is carried out with the help of the Social Science Open Access Repository and the university library of Freie Universität Berlin (Refubium).