Measuring improvement in latent semantic analysis-based marking systems: using a computer to mark questions about HTML

Haley, Debra; Thomas, Pete; De Roeck, Anne and Petre, Marian (2007). Measuring improvement in latent semantic analysis-based marking systems: using a computer to mark questions about HTML. In: ACM 9th International Australasian Computing Education Conference, 30 Jan - 02 Feb 2007, Ballarat, Victoria, Australia.

URL: http://portal.acm.org/citation.cfm?id=1273677

Abstract

This paper proposes two unconventional metrics as an important tool for assessment research: the Manhattan (L1) and the Euclidean (L2) distance measures. We used them to evaluate the results of a Latent Semantic Analysis (LSA) system that assesses short answers to two questions about HTML in an introductory computer science class. This is the only study, as far as we know, that addresses the question of how well an LSA-based system can evaluate answers in the very specific and technical language of HTML. We found that, although the literature offers several ways to measure automatic assessment results, they were not useful for our purpose: comparing the marks given by LSA with the marks awarded by a human tutor. We demonstrate how L1 and L2 quantify the results of varying the amount of training data needed to enable LSA to mark the answers to two HTML questions. Although this paper describes the use of the metrics in one particular case, it has more general applicability. Much fine-tuning of an LSA marking system is required to achieve good results, and a researcher needs an easy way to evaluate the effects of various modifications to the system. The Manhattan and Euclidean distance measures provide this functionality.
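The two metrics the abstract names can be sketched directly: the Manhattan (L1) distance sums the absolute differences between the LSA-assigned marks and the tutor's marks, while the Euclidean (L2) distance takes the square root of the sum of squared differences. A minimal Python sketch follows; the mark values are illustrative, not data from the paper.

```python
import math

def manhattan_distance(lsa_marks, human_marks):
    """L1 distance: sum of absolute mark differences across all answers."""
    return sum(abs(a - b) for a, b in zip(lsa_marks, human_marks))

def euclidean_distance(lsa_marks, human_marks):
    """L2 distance: square root of the summed squared mark differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lsa_marks, human_marks)))

# Hypothetical marks for five short answers (values for illustration only)
lsa_marks = [2.0, 1.0, 3.0, 0.0, 2.0]
tutor_marks = [2.0, 2.0, 3.0, 1.0, 1.0]

print(manhattan_distance(lsa_marks, tutor_marks))  # 3.0
print(euclidean_distance(lsa_marks, tutor_marks))  # ~1.732 (sqrt of 3)
```

A smaller distance indicates closer agreement between the automatic marker and the human tutor, which is how such measures let a researcher compare system variants, e.g. different amounts of training data.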
