Burel, Grégoire; Tavakoli, Mohammadali; and Alani, Harith (2024). DOI: https://doi.org/10.1002/aaai.12180
Abstract
Correcting misinformation is a complex task, influenced by various psychological, social, and technical factors. Most methods for evaluating the effectiveness of correction approaches rely on crowdsourcing, questionnaires, lab-based simulations, or hypothetical scenarios. However, how well these methods and their findings translate to real-world settings, where individuals willingly and freely disseminate misinformation, remains largely unexplored. Consequently, we lack a comprehensive understanding of how individuals who share misinformation in natural online environments respond to corrective interventions. In this study, we explore the effectiveness of corrective messaging on 3898 users who shared misinformation on Twitter/X over two years. We designed and deployed a bot that automatically identifies individuals who share misinformation and subsequently alerts them to related fact-checks in various message formats. Our analysis shows that only a small minority of users react positively to the corrective messages, with most either ignoring them or reacting negatively. Nevertheless, we also found that more active users were proportionally more likely to react positively to corrections, and that different message tones made particular user groups more likely to react to the bot.