Larasati, Retno (2023).
DOI: https://doi.org/10.21954/ou.ro.00015aca
Abstract
The way in which Artificial Intelligence (AI) systems reach conclusions is not always transparent to end-users, whether experts or non-experts. This raises serious concerns about the trust that people would place in such systems if they were to be adopted in real-life contexts. These concerns become even greater when individuals’ well-being is at stake, as in the case of AI technologies applied to healthcare. An emerging research area called Explainable AI (XAI) addresses this problem by providing a layer of explanation that helps end-users make sense of AI results. The overall assumption behind XAI research is that explainability can improve end-users’ trust in AI systems. Trusting AI applications may have a strong positive economic and societal impact, especially because AI is increasingly demonstrating improved performance in reducing the cost of carrying out highly complex human tasks. However, the issues of over-trust and under-trust also need to be addressed. Non-expert users have been shown to often over-trust or under-trust AI systems, even when they have very little knowledge of the technical competence of the system. Over-trust can have dangerous societal consequences when trust is placed in systems of low or unclear technical competence, while under-trust can hinder the adoption of AI systems in everyday life.
This doctoral research studies the extent to which explanations and interactions can help non-expert users properly calibrate trust in AI systems, specifically AI for disease detection and preliminary diagnosis. This means reducing trust when users tend to over-trust an unreliable system and increasing trust if the system can be shown to work well. Four user studies were conducted using data collection methods that included literature review, semi-structured interviews, online surveys, and focus groups, following both qualitative and quantitative research approaches and involving medical professionals, AI experts, and non-experts (considered the primary users of the AI system). Through these four user studies, new key features of meaningful explanation were defined, concrete guidelines for designing meaningful explanations were proposed, a new tool for the quantitative measurement of trust between humans and AI was developed, and a series of reflections on the complex relationship between explanation and trust was presented.
This doctoral work makes three fundamental contributions to knowledge that can shape future research in Explainable AI in healthcare. First, it informs how to construct explanations that non-expert users can make sense of (meaningful explanations). Second, it contextualises current XAI research in healthcare, informing how explanations should be designed for AI-assisted disease detection and preliminary diagnosis systems (Explanation Design Guidelines). Third, it proposes the first validated survey instrument to measure non-expert users’ trust in AI healthcare applications. This user-friendly survey method can help future XAI researchers compare results and potentially accelerate the development of more robust XAI research. Finally, this doctoral research provides preliminary insights into the importance of the interaction modality of explanations in influencing trust. Audio-based conversational interaction has been identified as a more promising way to provide health diagnosis explanations to patients than more static, hypertext-based interactions; audio-based conversational XAI interfaces help laypersons appropriately calibrate trust to a greater extent than less interactive interfaces. These preliminary findings can inform and promote future research on XAI by shifting the focus of current research from explanation content design to explanation delivery and interaction design.