Samela, Laura; Daga, Enrico and Mulholland, Paul (2024).
Abstract
Large language models (LLMs) play a crucial role in applications that require content to be tailored to user backgrounds and perspectives. In the context of cultural engagement, these models hold the promise of tailoring art interpretations to diverse audiences. However, LLMs are known to generate biased content, thereby perpetuating stereotypes and inequality. Knowledge engineering methodologies can support the systematic observation of generative AI outputs. In this paper, we propose a method to identify these biases through persona-based prompting. Crucially, we find evidence of vividness bias, a phenomenon known in social psychology whereby decisions are disproportionately driven by particularly salient aspects of a situation. We therefore investigate this bias systematically, proposing a method based on in-context learning with pairwise associations of persona features. We then represent LLM behaviour as a decision tree to capture detailed evidence of bias. We study this phenomenon using Google Bard and artworks from the Irish Museum of Modern Art (IMMA), focusing on features such as gender, race, age, profession, and sexual orientation. We discuss our findings and identify opportunities and challenges in dealing with vividness bias in persona-based, generated art interpretations.
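To make the pairwise setup concrete, here is a minimal sketch of how prompts over pairwise associations of persona features might be generated. It is illustrative only: the feature values, the prompt template, and the artwork title are placeholders, not the paper's actual study materials or wording.

```python
from itertools import combinations

# Illustrative persona features; the paper studies gender, race, age,
# profession, and sexual orientation. The values below are examples,
# not the feature values used in the study.
FEATURES = {
    "gender": ["woman", "man"],
    "race": ["Black", "white"],
    "age": ["young", "elderly"],
    "profession": ["nurse", "engineer"],
    "sexual orientation": ["gay", "straight"],
}

# Hypothetical prompt template standing in for the study's wording.
PROMPT_TEMPLATE = (
    "You are a museum guide. Interpret the artwork '{artwork}' "
    "for a visitor who is {value_a} and {value_b}."
)

def pairwise_persona_prompts(artwork: str):
    """Yield (feature pair, value pair, prompt) for every pairwise
    association of persona features, so each generated interpretation
    can be attributed to exactly two feature values."""
    for feat_a, feat_b in combinations(FEATURES, 2):
        for val_a in FEATURES[feat_a]:
            for val_b in FEATURES[feat_b]:
                prompt = PROMPT_TEMPLATE.format(
                    artwork=artwork, value_a=val_a, value_b=val_b
                )
                yield (feat_a, feat_b), (val_a, val_b), prompt

if __name__ == "__main__":
    # "Example Artwork" is a placeholder, not an IMMA piece.
    for features, values, prompt in pairwise_persona_prompts("Example Artwork"):
        print(features, values)
```

Under this setup, each model response could be labelled (for instance, by which persona feature the interpretation foregrounds) and the labelled feature-value pairs fitted with a standard decision-tree learner to surface which features dominate the model's behaviour, in the spirit of the decision-tree representation described in the abstract.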