Ayobi, A.; Stawarz, K.; Katz, D.; Marshall, P.; Yamagata, T.; Santos-Rodríguez, R.; Flach, P. and O'Kane, A. A. (2021).
URL: http://ceur-ws.org/Vol-2903/IUI21WS-TExSS-3.pdf
Abstract
Understanding artificial intelligence (AI) and machine learning (ML) approaches is becoming increasingly important for people with a wide range of professional backgrounds. However, it is unclear how ML concepts can be effectively explained as part of human-centred and multidisciplinary design processes. We provide a qualitative account of how AI researchers explained, and non-experts perceived, ML concepts as part of a co-design project that aimed to inform the design of ML applications for diabetes self-care. We identify benefits and challenges of explaining ML concepts with analogical narratives, information visualisations, and publicly available videos. Co-design participants not only reported gaining an improved understanding of ML concepts but also highlighted challenges of understanding ML explanations, including misalignments between scientific models and their lived self-care experiences and individual information needs. We frame our findings through the lens of Star and Griesemer's concept of boundary objects to discuss how the presentation of user-centred ML explanations could strike a balance between being plastic and robust enough to support design objectives and people's individual information needs.
About
- Item ORO ID: 78477
- Item Type: Conference or Workshop Item
- Academic Unit or School: Faculty of Science, Technology, Engineering and Mathematics (STEM) > Computing and Communications
- Depositing User: ORO Import