Kwarteng, Joseph (2024).
DOI: https://doi.org/10.21954/ou.ro.00097101
Abstract
In an era of unprecedented digital connectivity, social networking sites have become a global crossroads of cultures, nations, and ethnicities. Yet this digital expansion has been paralleled by the proliferation of hate speech, particularly speech targeted at specific genders and ethnicities. The speed at which digital networks operate allows hate speech to spread more rapidly than ever before, crossing geographical, social, and political boundaries with ease and often compounding the harm it causes. The scalability of hate speech on these platforms makes monitoring and moderating online spaces to protect vulnerable groups a formidable challenge.
In this thesis, we investigate the phenomenon of "misogynoir", a specific form of intersectional hate speech that combines racial and gender-based prejudice and is directed at Black women. We present a comprehensive study of misogynoir, combining qualitative and quantitative methodologies to highlight the challenges and nuances in identifying and understanding it. We begin with an extensive literature review, from which we identify models of misogynoir. Based on these models, we created a lexicon of terms and expressions and used it to examine the online presence of this form of hate, focusing on Black women in the science and technology sector through a detailed analysis of public responses to their experiences of misogynoir shared on X (formerly Twitter).

However, given the nuanced and context-dependent nature of misogynoir, which the lexicon approach struggled to capture, we proceeded to assess the effectiveness of current state-of-the-art automated hate speech detection tools. Our evaluation spanned specially curated datasets of sampled tweets that potentially exemplified misogynoir and tweets that expressed support for Black women, with the aim of determining how well these tools identify the multifaceted expressions of misogynoir in online discourse.

Recognising that automated detection systems are trained on human-annotated datasets, we then examined how annotators' identities and lived experiences shape their perception and labelling of potential instances of misogynoir. An in-depth qualitative analysis of the justifications provided by annotators from four distinct demographic groups showed that an annotator's background profoundly influences how they interpret content. Findings from this work shed light on annotator behaviour, the diverse rationales annotators give when labelling intersectional hate, and the ways identity and lived experience influence labelling decisions. They also expose the inadequacies of present automated hate speech detection tools in detecting misogynoir, setting the stage for future technological improvements. We emphasise the necessity for more advanced, context-sensitive tools tailored to the unique challenges Black women encounter on digital platforms, steering us toward a more just and equitable online environment.
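To make the lexicon step concrete, the following is a minimal, hypothetical sketch of how such a keyword screen might be implemented in Python. The entries shown are placeholders rather than the thesis's actual lexicon, and, as the abstract notes, exactly this style of matching proves too blunt for context-dependent misogynoir.

```python
import re

# Illustrative placeholders only: the thesis derives its lexicon from
# literature-based models of misogynoir; the real entries are not shown here.
LEXICON = [
    "angry black woman",       # a documented stereotype/trope
    "placeholder_coded_term",  # stands in for other lexicon expressions
]

# One case-insensitive, word-boundary pattern per entry, so multi-word
# expressions match only as whole phrases.
PATTERNS = [
    re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
    for term in LEXICON
]

def flag_tweet(text: str) -> list[str]:
    """Return the lexicon entries that occur in a tweet's text."""
    return [term for term, pat in zip(LEXICON, PATTERNS) if pat.search(text)]

if __name__ == "__main__":
    print(flag_tweet("She was dismissed as just another angry Black woman."))
    # -> ['angry black woman']
```

The example also illustrates the failure mode that motivates the move to context-sensitive tools: a supportive tweet quoting an abusive phrase is indistinguishable, to a keyword screen like this, from the abuse itself.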