Investigating Vividness Bias in Language Models Through Art Interpretations

Samela, Laura; Daga, Enrico and Mulholland, Paul (2024). Investigating Vividness Bias in Language Models Through Art Interpretations. In: ELMKE: 1st Workshop on Evaluation of Language Models in Knowledge Engineering, co-located with EKAW-24 (Zhang, Bohui; Alharbi, Reham and He, Yuan eds.), 26 Nov 2024, Amsterdam, Netherlands, CEUR Workshop Proceedings.

Abstract

Large language models (LLMs) play a crucial role in applications that require tailoring content to user backgrounds and perspectives. In the context of cultural engagement, these models hold the promise of adapting art interpretations to diverse audiences. However, LLMs are known to generate biased content, thereby perpetuating stereotypes and inequality. Knowledge engineering methodologies can support the systematic observation of generative AI outputs. In this paper, we propose a method to identify these biases through persona-based prompting. Crucially, we find evidence of vividness bias, a phenomenon known in social psychology whereby decisions are disproportionately driven by the most salient, vivid aspects of a situation. We therefore investigate this bias systematically, proposing a method based on in-context learning with pairwise associations of persona features. We then represent LLM behaviour as a decision tree to capture detailed evidence of bias. We investigate this phenomenon using artworks from the Irish Museum of Modern Art (IMMA) and Google Bard, focusing on features such as gender, race, age, profession, and sexual orientation. We discuss our findings and identify opportunities and challenges in dealing with vividness bias in persona-based, generated art interpretations.
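As a rough illustration of the two ingredients named in the abstract, the sketch below shows one plausible way to (a) enumerate persona prompts from pairwise combinations of feature values and (b) fit a decision tree over those features to summarise annotated model outputs. This is not the authors' actual pipeline: the feature inventory, prompt template, and labelling scheme are hypothetical placeholders, and the annotation of which feature an interpretation foregrounds is assumed to be done manually.

```python
from itertools import combinations, product

import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical feature inventory; the paper's actual value sets for gender,
# race, age, profession, and sexual orientation are not reproduced here.
FEATURES = {
    "gender": ["woman", "man"],
    "age": ["young", "elderly"],
    "profession": ["nurse", "engineer"],
}

# Hypothetical persona-based prompt template.
TEMPLATE = "Imagine you are a {persona}. Interpret this artwork: {artwork}"


def pairwise_prompts(artwork: str):
    """Yield (persona, prompt) for every pair of features and every
    combination of their values, e.g. an 'elderly engineer' persona."""
    for f1, f2 in combinations(FEATURES, 2):
        for v1, v2 in product(FEATURES[f1], FEATURES[f2]):
            persona = {f1: v1, f2: v2}
            yield persona, TEMPLATE.format(
                persona=" ".join(persona.values()), artwork=artwork
            )


def fit_bias_tree(records):
    """Fit a shallow decision tree over persona features.

    records: (persona_dict, label) pairs, where the label marks which
    persona feature the generated interpretation foregrounded (assigned
    by manual annotation of the model outputs)."""
    X = pd.get_dummies(pd.DataFrame([p for p, _ in records]).fillna("n/a"))
    y = [label for _, label in records]
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=list(X.columns)))
    return tree
```

Under this reading, each generated interpretation is annotated with the persona feature it foregrounds, and the tree's early splits indicate which features dominate the model's output, i.e. where vividness bias may be acting.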
