Making sense of online discussions: can automated reports help?

Anastasiou, Lucas and De Liddo, Anna (2021). Making sense of online discussions: can automated reports help? In: CHI EA '21: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, ACM, pp. 1–7.

DOI: https://doi.org/10.1145/3411763.3451815

Abstract

Enabling healthier online deliberation around issues of public concern is an increasingly vital challenge in today's society. Two fundamental components of healthier deliberation are: i. people's capability to make sense of what they read, so that their contributions can be relevant; and ii. the improvement of the overall quality of the debate, so that noise is reduced and useful signals can inform collective decision making. Platform designers often resort to computational aids to improve these two processes. In this paper, we examine automated reporting as a promising means of improving sensemaking in discussion platforms. We compared three approaches to automated reporting: an abstractive summariser, a template report, and an argumentation highlighting system. We then evaluated participants' improvements in sensemaking and their perception of the overall quality of the debate. The study suggests that argument mining technologies are particularly promising computational aids for improving sensemaking and the perceived quality of online discussion, thanks to their capability to combine computational models for automated reasoning with users' cognitive needs and expectations of automated reporting.
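The paper itself does not include code, but as a rough illustration of the first of the three approaches it compares, the sketch below shows how an abstractive summariser could be applied to a discussion thread using an off-the-shelf pretrained model. The model choice (facebook/bart-large-cnn), the flatten_thread helper, and the example posts are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of abstractive summarisation over a discussion thread,
# using the Hugging Face transformers library. The model choice and the
# flatten_thread helper are assumptions, not the authors' setup.
from transformers import pipeline

def flatten_thread(posts: list[str]) -> str:
    """Join individual posts into one document for the summariser."""
    return "\n".join(posts)

# Load a general-purpose pretrained abstractive summarisation model.
summariser = pipeline("summarization", model="facebook/bart-large-cnn")

posts = [
    "I think the new cycling lane will reduce traffic in the city centre.",
    "Evidence from other cities suggests the opposite during construction.",
    "Either way, air quality near schools should be the deciding factor.",
]

# Generate a short abstractive report of the discussion.
report = summariser(flatten_thread(posts), max_length=60, min_length=15,
                    do_sample=False)
print(report[0]["summary_text"])
```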
