Volume 12: Challenges and Perspectives of Hate Speech Research


Machines Do Not Decide Hate Speech

Machine learning, power, and the intersectional approach

Jae Yeon Kim

Berlin, 2023
DOI 10.48541/dcr.v12.21 (SSOAR)

Abstract: The advent of social media has increased digital content—and, with it, hate speech. Advances in machine learning help detect online hate speech at scale, but scale is only one part of the problem of moderating it. Machines do not decide what comprises hate speech; that determination is part of a societal norm. Power relations establish such norms and, thus, determine who can say what comprises hate speech. Without considering this data-generation process, a fair automated hate speech detection system cannot be built. This chapter first examines the relationship between power, hate speech, and machine learning. It then examines how an intersectional lens—focusing on power dynamics between and within social groups—helps identify bias in the data sets used to build automated hate speech detection systems.


Jae Yeon Kim is Assistant Professor of Data Science at the KDI School of Public Policy and Management, South Korea, and an affiliated researcher of the SNF Agora Institute at Johns Hopkins University, USA.

Kim, J. Y. (2023). Machines do not decide hate speech: Machine learning, power, and the intersectional approach. In C. Strippel, S. Paasch-Colberg, M. Emmer, & J. Trebbe (Eds.), Challenges and perspectives of hate speech research (pp. 355–369). Digital Communication Research. https://doi.org/10.48541/dcr.v12.21

This book is published open access and licensed under Creative Commons Attribution 4.0 (CC BY 4.0).
The persistent long-term archiving of this book is carried out with the help of the Social Science Open Access Repository (SSOAR) and the university library of Freie Universität Berlin (Refubium).