Lin, Chenghua; He, Yulan and Everson, Richard
This paper presents a hierarchical Bayesian model based on latent Dirichlet allocation (LDA), called subjLDA, for sentence-level subjectivity detection, which automatically identifies whether a given sentence expresses opinion or states facts. In contrast to most of the existing methods relying on either labelled corpora for classifier training or linguistic pattern extraction for subjectivity classification, we view the problem as weakly-supervised generative model learning, where the only input to the model is a small set of domain-independent subjectivity lexical clues. A mechanism is introduced to incorporate the prior information about the subjectivity lexical clues into model learning by modifying the Dirichlet priors of topic-word distributions. The subjLDA model has been evaluated on the Multi-Perspective Question Answering (MPQA) dataset and promising results have been observed in the preliminary experiments. We have also explored adding neutral words as prior information for model learning. It was found that while incorporating subjectivity clues bearing positive or negative polarity can achieve a significant performance gain, the prior lexical information from neutral words is less effective.
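The mechanism the abstract describes — folding lexical clues into the model by adjusting the Dirichlet priors of the topic-word distributions — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the label layout (objective / positive-subjective / negative-subjective), the function name `build_asymmetric_beta`, and the `base`/`boost` hyperparameter values are all assumptions made for the example.

```python
import numpy as np

def build_asymmetric_beta(vocab, subj_lexicon, n_labels=3,
                          base=0.01, boost=1.0):
    """Build label-specific Dirichlet priors over the vocabulary.

    Words found in the subjectivity lexicon get a boosted prior under
    the label matching their polarity and a suppressed prior under the
    objective label, encoding the lexical clues as prior knowledge.
    Label layout (an assumption for illustration, not the paper's exact
    setup): 0 = objective, 1 = positive-subjective, 2 = negative-subjective.
    """
    V = len(vocab)
    # Symmetric base prior: every word starts equally (im)probable
    # under every label.
    beta = np.full((n_labels, V), base)
    word_to_id = {w: i for i, w in enumerate(vocab)}
    for word, polarity in subj_lexicon.items():
        if word not in word_to_id:
            continue  # clue not in this corpus's vocabulary
        j = word_to_id[word]
        label = 1 if polarity == "positive" else 2
        beta[label, j] += boost   # encourage the word under its polarity label
        beta[0, j] = base * 1e-3  # discourage it under the objective label
    return beta

# Toy example: two polarity-bearing clue words, two neutral words.
vocab = ["excellent", "terrible", "the", "movie"]
lexicon = {"excellent": "positive", "terrible": "negative"}
beta = build_asymmetric_beta(vocab, lexicon)
```

Because Dirichlet parameters act as pseudo-counts, raising an entry of `beta` biases the corresponding topic-word distribution toward that word before any data are seen; the rest of the model can then be trained with an otherwise standard Gibbs sampler or variational procedure.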
|Item Type:||Conference Item|
|Copyright Holders:||2011 AFNLP|
|Extra Information:||Proceedings of the 5th International Joint Conference on Natural Language Processing, Chiang Mai, Thailand, November 8–13, 2011.|
|Academic Unit/Department:||Knowledge Media Institute|
|Interdisciplinary Research Centre:||Centre for Research in Computing (CRC)|
|Depositing User:||Kay Dave|
|Date Deposited:||10 Nov 2011 10:53|
|Last Modified:||20 Jun 2014 04:49|