False Hopes in Automated Abuse Detection (Short Paper)

Farrell, Tracie and Kouadri Mostéfaoui, Soraya (2023). False Hopes in Automated Abuse Detection (Short Paper). In: Proceedings of the Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI 2023) (Murukannaiah, Pradeep K. and Hirzle, Teresa, eds.), CEUR Workshop Proceedings (CEUR-WS.org), Vol. 3456, pp. 109–118.

URL: https://ceur-ws.org/Vol-3456/short3-1.pdf

Abstract

The idea of a protected characteristic is supposedly based on evidence of discrimination against a group of people associated with that characteristic or a combination of such characteristics. However, this determination is political and evolves over time as existing forms of discrimination are recognised and new forms emerge. All the while, these notions are also rooted in colonial practices and legacies of colonialism that create and re-create injustice and discrimination against those same “protected” groups. Automated hate-speech detection software is typically based on those political definitions of hate, which are then codified in law. Moreover, the law tends to focus on classes of characteristics (e.g. gender, ethnicity) rather than on the specific characteristics that are particularly targeted by discrimination and hate (being a woman, being Indigenous, Black, Asian, etc.). In this paper, we explore some of the implications of this for hate speech detection, particularly detection supported by Artificial Intelligence (AI), and for groups that experience a significant amount of prejudicial hate online.
