Eliminating Contextual Bias in Aspect-based Sentiment Analysis

An, Ruize; Zhang, Chen and Song, Dawei (2024). Eliminating Contextual Bias in Aspect-based Sentiment Analysis. In: Advances in Information Retrieval (Goharian, Nazli; Tonellotto, Nicola; He, Yulan; Lipani, Aldo; McDonald, Graham; Macdonald, Craig and Ounis, Iadh eds.), ECIR 2024. Lecture Notes in Computer Science (LNCS), vol. 14608, Springer, Cham, Switzerland, pp. 90–107.

DOI: https://doi.org/10.1007/978-3-031-56027-9_6

Abstract

Pretrained language models (LMs) have achieved remarkable results in aspect-based sentiment analysis (ABSA). However, these models may struggle in particular cases, e.g., detecting sentiments expressed towards targeted aspects through only implicit or adversarial expressions. Since it is hard for models to align implicit or adversarial expressions with their corresponding aspects, the sentiment of a targeted aspect is largely influenced by the expressions towards other aspects in the sentence. We name this phenomenon contextual bias. To tackle the problem, we propose a flexible aspect-oriented debiasing method (ARDE) that eliminates harmful contextual bias without adjusting the underlying LMs. Intuitively, ARDE calibrates the prediction towards the targeted aspect by subtracting the bias towards the context. Favorably, ARDE is theoretically supported by counterfactual reasoning. Experiments on the SemEval benchmark show that ARDE empirically improves accuracy on contextually biased aspect sentiments without degrading accuracy on unbiased ones. Driven by the recent success of large language models (LLMs, e.g., ChatGPT), we further find that even LLMs can fail to address certain contextual biases, which can nevertheless be effectively tackled by ARDE.
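The calibration idea described in the abstract (subtracting a context-only, counterfactual prediction from the full aspect-aware prediction) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the scalar weight `lam`, and the assumption of two separate forward passes (one with the target aspect, one with context only) are all hypothetical.

```python
import numpy as np

def softmax(z):
    """Convert a vector of logits into a probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

def arde_calibrate(logits_with_aspect, logits_context_only, lam=1.0):
    """Hypothetical sketch of aspect-oriented debiasing:
    subtract the context-only (counterfactual) logits, scaled by lam,
    from the aspect-aware logits, then renormalize."""
    return softmax(logits_with_aspect - lam * logits_context_only)

# Toy example over 3 sentiment classes (negative, neutral, positive):
# the context pushes the raw prediction towards class 0.
full = np.array([2.0, 1.5, 0.1])   # logits for sentence + target aspect
ctx = np.array([1.8, 0.2, 0.1])    # logits for context-only counterfactual
probs = arde_calibrate(full, ctx, lam=1.0)
```

In this toy case the uncalibrated prediction picks class 0, while the calibrated one picks class 1, since most of class 0's logit mass is attributable to the context rather than the target aspect.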
