Larasati, Retno
Abstract
With Artificial Intelligence (AI) systems being implemented everywhere, including in healthcare, interaction with AI-embedded systems is inevitable. However, modern AI algorithms are complex and difficult to understand. Different types of users may interact with an AI healthcare application: Medical Professionals (doctors), AI/Machine Learning (ML) Experts (system developers), and Laypeople (patients). The lack of meaningful and usable user interfaces for XAI makes it hard to effectively assess the impact of explanations on end-users, which further hinders our ability to test hypotheses and advance our understanding of this complex research field. In this abstract, we investigate how users, especially non-expert users, understand an explanation given by an AI application, and identify possible room for improvement.
About
- Item ORO ID
  - 79837
- Item Type
  - Conference or Workshop Item
- Academic Unit or School
  - Faculty of Science, Technology, Engineering and Mathematics (STEM) > Knowledge Media Institute (KMi)
  - Faculty of Science, Technology, Engineering and Mathematics (STEM)
- Depositing User
  - Retno Larasati