Farrell, Tracie and Kouadri Mostéfaoui, Soraya (2023).
URL: https://ceur-ws.org/Vol-3456/short3-1.pdf
Abstract
The idea of a protected characteristic is supposedly based on evidence of discrimination against a group of people associated with that characteristic or a combination of such characteristics. However, this determination is political and evolves over time as existing forms of discrimination are recognised and new forms emerge. All the while, these notions are also rooted in colonial practices and legacies of colonialism that create and re-create injustice and discrimination against those same “protected” groups. Automated hate-speech detection software is typically based on those political definitions of hate, which are then codified in law. Moreover, the law tends to focus on classes of characteristics (e.g. gender, ethnicity) rather than on the specific characteristics that are particularly targeted by discrimination and hate (being a woman, being Indigenous, Black, Asian, etc.). In this paper, we explore some of the implications of this for hate speech detection, particularly detection supported by Artificial Intelligence (AI), and for groups that experience a significant amount of prejudicial hate online.
About
- Item ORO ID: 93723
- Item Type: Conference or Workshop Item
- ISSN: 1613-0073
- Keywords: hate speech detection; protected characteristics; colonialism; artificial intelligence; social justice
- Academic Unit or School:
  Faculty of Science, Technology, Engineering and Mathematics (STEM) > Knowledge Media Institute (KMi)
  Faculty of Science, Technology, Engineering and Mathematics (STEM)
  Faculty of Science, Technology, Engineering and Mathematics (STEM) > Computing and Communications
- Copyright Holders: © 2023 The Authors
- Related URLs: https://ceur-ws.org/Vol-3456/ (Publication)
- Depositing User: ORO Import