The Accuracy Trap or
How to Build a Phony Classifier
DOI 10.48541/dcr.v12.22 (SSOAR)
Abstract: This guide explains, in four steps, how to build a phony text classifier using supervised machine learning—a classifier that is absolutely unreliable but looks outwardly sophisticated and attractive. You might enjoy this text if one or more of the following statements apply to you: You are interested in the automated identification of hate speech or related content in online discussions, as long as it looks good; you want to do something with machine learning to impress your peer group, but you do not have the nerve to dig deeper into this field; you are either a somewhat sneaky or a humorous person. If, however, you are a good and decent researcher, you might also take hints from this text on how not to step into the accuracy trap and how not to fall for the tricks of phony classification.
Anke Stoll is a research associate at the Institute for Social Sciences at Heinrich Heine University in Düsseldorf, Germany.
Stoll, A. (2023). The accuracy trap or How to build a phony classifier. In C. Strippel, S. Paasch-Colberg, M. Emmer, & J. Trebbe (Eds.), Challenges and perspectives of hate speech research (pp. 371–381). Digital Communication Research. https://doi.org/10.48541/dcr.v12.22
This book is published open access and licensed under Creative Commons Attribution 4.0 (CC-BY 4.0).
The persistent long-term archiving of this book is carried out with the help of the Social Science Open Access Repository and the university library of Freie Universität Berlin (Refubium).