Comparing automatically detected reflective texts with human judgements

Ullmann, Thomas Daniel; Wild, Fridolin and Scott, Peter (2012). Comparing automatically detected reflective texts with human judgements. In: 2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning, 18 Sep 2012, Saarbrücken, Germany, pp. 101–116.

URL: http://ceur-ws.org/Vol-931/paper8.pdf

Abstract

This paper reports descriptive results from an experiment comparing automatically detected reflective and non-reflective texts against human judgements. Drawing on the theory of reflective writing assessment and its operationalisation, five elements of reflection were defined. For each element, a set of indicators was developed that automatically annotates texts for reflection, parameterised with authoritative texts. From a large blog corpus, 149 texts were retrieved, each annotated as either reflective or non-reflective. An online survey was then used to gather human judgements for these texts. These two data sets were used to compare the quality of the reflection detection algorithm with human judgements. The analysis indicates the expected difference between reflective and non-reflective texts.
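The core of such a comparison is measuring how often the detector's binary labels coincide with human judgements. As a minimal sketch (not the paper's actual method), the snippet below computes raw agreement and Cohen's kappa for two binary label lists; the example labels and the function name are purely illustrative assumptions.

```python
def agreement_and_kappa(auto_labels, human_labels):
    """Return raw agreement and Cohen's kappa for two binary label lists.

    Hypothetical helper for illustration; 1 = judged reflective,
    0 = judged not reflective.
    """
    assert len(auto_labels) == len(human_labels) and auto_labels
    n = len(auto_labels)
    # Proportion of texts on which detector and humans agree.
    observed = sum(a == h for a, h in zip(auto_labels, human_labels)) / n
    # Chance agreement expected from each rater's marginal label frequencies.
    p_auto = sum(auto_labels) / n
    p_human = sum(human_labels) / n
    expected = p_auto * p_human + (1 - p_auto) * (1 - p_human)
    kappa = (observed - expected) / (1 - expected) if expected < 1 else 1.0
    return observed, kappa

# Made-up example data, not from the paper's 149-text corpus.
auto = [1, 1, 0, 0, 1, 0, 1, 0]
human = [1, 0, 0, 0, 1, 0, 1, 1]
obs, kappa = agreement_and_kappa(auto, human)
```

Kappa corrects raw agreement for the agreement expected by chance, which matters when one label (e.g. not-reflective) dominates the corpus.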
