Supporting Online Toxicity Detection with Knowledge Graphs

Reyero Lobo, Paula; Daga, Enrico and Alani, Harith (2022). Supporting Online Toxicity Detection with Knowledge Graphs. In: Sixteenth International AAAI Conference on Web and Social Media, 6-9 Jun 2022, Atlanta, Georgia.




Due to the rise in toxic speech on social media and other online platforms, there is a growing need for systems that can automatically flag or filter such content. Various supervised machine learning approaches have been proposed, trained on manually annotated toxic speech corpora. However, annotators sometimes struggle to judge, or to agree on, which text is toxic and which group is targeted in a given text. This may be due to bias, subjectivity, or unfamiliarity with the terminology used (e.g. domain language, slang). In this paper, we propose the use of a knowledge graph to better understand such toxic speech annotation issues. Our empirical results show that 3% of a sample of 19k texts mention terms associated with frequently attacked gender and sexual orientation groups that were not correctly identified by the annotators.
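The annotation-gap check outlined above can be sketched roughly as follows: terms that a knowledge graph associates with a target group are matched against each text, and texts whose annotations omit a mentioned group are flagged for review. The term lists, group names, and sample text below are illustrative placeholders, not the paper's actual knowledge graph or corpus.

```python
# Hypothetical sketch: flag texts that mention group-related terms drawn
# from a knowledge graph but whose annotations miss that target group.
# The term sets here are illustrative stand-ins, not the real ontology.
kg_terms = {
    "gender": {"trans", "nonbinary", "woman"},
    "sexual_orientation": {"gay", "lesbian", "bisexual"},
}

def find_missed_targets(text, annotated_groups):
    """Return groups whose KG-associated terms appear in the text but
    are absent from the annotators' target-group labels."""
    tokens = set(text.lower().split())
    missed = []
    for group, terms in kg_terms.items():
        if tokens & terms and group not in annotated_groups:
            missed.append(group)
    return missed

# Example: annotators assigned no target group, yet the text mentions
# a sexual-orientation term known to the knowledge graph.
sample = "some toxic comment about gay people"
print(find_missed_targets(sample, annotated_groups=set()))
```

In practice the term sets would be retrieved from the knowledge graph (e.g. via SPARQL) and matching would need lemmatisation and multi-word term handling, but the flagging logic is the same.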
