The effect of explanation styles on user's trust

Larasati, Retno; De Liddo, Anna and Motta, Enrico (2020). The effect of explanation styles on user's trust. In: 2020 Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies, 17 Mar 2020, Cagliari, Italy.


This paper investigates the effects that different styles of textual explanation have on an explainee's trust in an AI medical support scenario. Drawing on the literature, we focused on four styles of explanation: contrastive, general, truthful, and thorough. We conducted a user study in which we presented explanations from a fictional mammography diagnosis application to 48 non-expert users, using a between-subjects design with four groups of 11-13 people, each group viewing a different explanation style. Our findings suggest that contrastive and thorough explanations produce higher personal-attachment trust scores than the general explanation style, while the truthful style shows no difference from the others. That is, users who received contrastive or thorough explanations found the explanation significantly more agreeable and better suited to their personal taste. These findings, though not conclusive, support the claim that explanation style affects users' trust in AI systems and may inform future explanation design and evaluation studies.