Machine Learning Explanations as Boundary Objects: How AI Researchers Explain and Non-Experts Perceive Machine Learning

Ayobi, A.; Stawarz, K.; Katz, D.; Marshall, P.; Yamagata, T.; Santos-Rodríguez, R.; Flach, P. and O'Kane, A. A. (2021). Machine Learning Explanations as Boundary Objects: How AI Researchers Explain and Non-Experts Perceive Machine Learning. In: 2021 Joint ACM Conference on Intelligent User Interfaces Workshops, ACMIUI-WS 2021, 13-17 Apr 2021, College Station.

URL: http://ceur-ws.org/Vol-2903/IUI21WS-TExSS-3.pdf

Abstract

Understanding artificial intelligence (AI) and machine learning (ML) approaches is becoming increasingly important for people with a wide range of professional backgrounds. However, it is unclear how ML concepts can be effectively explained as part of human-centred and multidisciplinary design processes. We provide a qualitative account of how AI researchers explained, and non-experts perceived, ML concepts as part of a co-design project that aimed to inform the design of ML applications for diabetes self-care. We identify benefits and challenges of explaining ML concepts with analogical narratives, information visualisations, and publicly available videos. Co-design participants not only reported an improved understanding of ML concepts but also highlighted challenges in making sense of ML explanations, including misalignments between scientific models and both their lived self-care experiences and their individual information needs. We frame our findings through the lens of Star and Griesemer's concept of boundary objects to discuss how the presentation of user-centred ML explanations could strike a balance between being plastic enough to meet people's individual information needs and robust enough to support design objectives.
