Reyero Lobo, Paula; Daga, Enrico and Alani, Harith (2022).
URL: https://www.icwsm.org/2022/index.html/
Abstract
Due to the rise in toxic speech on social media and other online platforms, there is a growing need for systems that can automatically flag or filter such content. Various supervised machine learning approaches have been proposed, trained on manually annotated toxic speech corpora. However, annotators sometimes struggle to judge, or to agree on, which text is toxic and which group is targeted in a given text. This could be due to bias, subjectivity, or unfamiliarity with the terminology used (e.g. domain language, slang). In this paper, we propose the use of a knowledge graph to better understand such toxic speech annotation issues. Our empirical results show that 3 in a sample of 19k texts mention terms associated with frequently attacked gender and sexual orientation groups that were not correctly identified by the annotators.
About
- Item ORO ID: 82776
- Item Type: Conference or Workshop Item
- Project Funding Details:
  - Funded Project Name: NoBIAS-Artificial Intelligence without Bias
  - Project ID: 860630
  - Funding Body: European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie Actions
- Academic Unit or School:
  - Faculty of Science, Technology, Engineering and Mathematics (STEM) > Knowledge Media Institute (KMi)
  - Faculty of Science, Technology, Engineering and Mathematics (STEM)
- Research Group: Social Data Science
- Copyright Holders: © 2022 Association for the Advancement of Artificial Intelligence
- Depositing User: Paula Reyero Lobo