Reflective writing analytics - Validating machine learning approaches

Ullmann, Thomas (2019). Reflective writing analytics - Validating machine learning approaches. In: CALRG Annual Conference 2019, 17-18 Jun 2019, The Open University, Milton Keynes.



Reflective writing is an important educational practice for training reflective thinking. Currently, researchers must analyse these writings manually, which limits both practice and research because the analysis is time- and resource-consuming. This presentation, based on Ullmann (2019), shows results from an empirical study evaluating whether machine learning can automate the manual analysis of reflection in writing. The study investigated eight categories that are often used in models for assessing reflective writing. The evaluation is based on 76 student essays (5,080 sentences), largely from second- and third-year health, business, and engineering students. To test the automated analysis of reflection in writing, machine learning models were built on a random sample of 80% of the sentences and then tested on the remaining 20%. Overall, the standardised evaluation shows that five of the eight categories can be detected automatically with substantial or almost perfect reliability, and the other three with moderate reliability (Cohen's κ between .53 and .85). On average, the accuracies of the automated analysis were 10% lower than those of the manual analysis. These findings enable reflection analytics that are immediate and scalable.
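The evaluation procedure described above (a random 80/20 train/test split of sentences, with agreement reported as Cohen's κ) can be sketched in plain Python. This is an illustrative sketch only, not the study's actual pipeline; the function names and the toy labels are assumptions for demonstration.

```python
import random

def train_test_split(items, test_fraction=0.2, seed=0):
    """Randomly hold out a fraction of items for testing (sketch of an 80/20 split)."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two label sequences:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of positions where the labels match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each rater's marginal label frequencies.
    categories = set(labels_a) | set(labels_b)
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)
```

For example, splitting 5,080 sentences with `test_fraction=0.2` leaves 4,064 for training and 1,016 for testing, and κ values of .53 and .85 would correspond to moderate and almost perfect agreement on common benchmark scales.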

Ullmann, T. D. (2019). Automated Analysis of Reflection in Writing: Validating Machine Learning Approaches. International Journal of Artificial Intelligence in Education, 29(2), 217–257.
