Sharma, Nirwan; Colucci-Gray, Laura; van der Wal, Rene and Siddharthan, Advaith (2022). DOI: https://doi.org/10.1145/3555535
Abstract
A number of initiatives invite members of the public to perform online classification tasks such as identifying objects or wildlife in images. These tasks are crucial to numerous large-scale Citizen Science projects across disciplines, with volunteers using their knowledge and online support tools to, for example, identify species of wildlife or classify galaxies by their shapes. However, for complex classification tasks, such as the bumblebee species identification examined in this case study, reaching agreement between volunteers, or even between experts, may require consensus-building processes. Collaborative and teamwork approaches to problem-solving and decision-making have been widely documented to improve both task performance and user learning in the real world. Online, however, such processes are typically mediated through asynchronously delivered feedback, and this article thus addresses a central research question: How do participants involved in species identification tasks respond to different forms of feedback provided during online collaboration and designed to support peer learning and improve task performance? We tested four different approaches to feedback within a collaboration task in which participants reviewed their previously annotated data using information curated from their peers on a long-running online citizen science initiative. The selected interfaces have a strong foundation in the social science and psychology literature and can be applied to citizen science practices as well as to other online communities. Results showed that while all four approaches increased accuracy, there were differences based on the types of consensus that existed before collaboration. These differences highlighted the usefulness of different forms of feedback during collaboration for increasing the accuracy of identifications and furthering users' expertise in identification tasks. We found that anonymised, goal-directed free-text comments posted on social learning interfaces were most effective in improving data accuracy and creating opportunities for peer learning, particularly where the species identification task was more difficult. This study has significant implications for extending the practice of citizen science across formal and informal learning environments and for reaching a variety of users.