Larasati, Retno (2020).
Abstract
With Artificial Intelligence (AI) systems being deployed everywhere, including in healthcare, interaction with AI-embedded systems is inevitable. However, modern AI algorithms are complex and difficult to understand. Different types of users may interact with an AI healthcare application: medical professionals (doctors), AI/Machine Learning (ML) experts (system developers), and laypeople (patients). The lack of meaningful and usable user interfaces for Explainable AI (XAI) makes it hard to effectively assess the impact of explanations on end-users, which further hinders our ability to test hypotheses and advance our understanding of this complex research field. In this abstract, we investigate how users, especially non-expert users, understand an explanation given by an AI application, and identify possible room for improvement.