The Effect of Explanation Styles on User’s Trust

Larasati, Retno; De Liddo, Anna and Motta, Enrico (2020). The Effect of Explanation Styles on User’s Trust. In: Proceedings of IUI workshop on Explainable Smart Systems and Algorithmic Transparency in Emerging Technologies, CEUR Workshop Proceedings, 2582.

URL: https://ceur-ws.org/Vol-2582/paper6.pdf

Abstract

This paper investigates the effects that different styles of textual explanation have on the explainee's trust in an AI medical support scenario. From the literature, we identified four styles of explanation: contrastive, general, truthful, and thorough. We conducted a user study in which we presented explanations of a fictional mammography diagnosis application to 48 non-expert users. We carried out a between-subjects comparison across four groups of 11-13 people, each viewing a different explanation style. Our findings suggest that contrastive and thorough explanations produce higher personal-attachment trust scores than the general style, while the truthful style shows no difference from the other styles. In other words, users who received contrastive or thorough explanations found the explanations significantly more agreeable and better suited to their personal taste. These findings, though not conclusive, confirm the impact of explanation style on users' trust in AI systems and may inform future explanation design and evaluation studies.
