The evaluation of electronic marking of examinations

Thomas, Pete (2003). The evaluation of electronic marking of examinations. In: 8th Annual International Conference on Innovation and Technology in Computer Science Education, Jul 2003, Thessaloniki, Greece.

URL: http://delivery.acm.org/10.1145/970000/961528/p50-...

Abstract

This paper discusses an approach to the electronic (automatic) marking of examination papers, in particular the extent to which a candidate's answers can be marked automatically, returning within a very short time a result comparable with a manually produced score. The investigation showed that there are good reasons for manual intervention in a predominantly automatic process. The paper reports tests of the automatic marking process which, in two experiments, yielded grades for examination scripts comparable with those awarded by human markers (although the automatic grade tends to be the lower of the two). An analysis of the correlations among the markers shows highly significant relationships between the human markers (between 0.91 and 0.95) and a significant relationship between the average human marker score and the electronic score (0.86).
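
The correlation figures quoted above are Pearson coefficients between markers' scores over the same set of scripts. The sketch below, which is illustrative and not the paper's code, shows how such inter-marker correlations could be computed; the score values and variable names (human_a, human_b, electronic) are made-up placeholders, not data from the study.

```python
import numpy as np

# Hypothetical per-script scores: each position is one exam script,
# marked independently by two human markers and the automatic marker.
human_a = np.array([62, 48, 75, 55, 80, 67, 41])
human_b = np.array([60, 50, 73, 58, 78, 70, 44])
electronic = np.array([58, 45, 70, 52, 77, 63, 40])

# Average human score per script, as used in the abstract's comparison.
human_avg = (human_a + human_b) / 2

def pearson(x, y):
    """Pearson correlation coefficient between two score vectors."""
    return float(np.corrcoef(x, y)[0, 1])

print(f"human A vs human B:      r = {pearson(human_a, human_b):.2f}")
print(f"avg human vs electronic: r = {pearson(human_avg, electronic):.2f}")
```

With real marking data, the first correlation corresponds to the inter-human agreement (0.91 to 0.95 in the paper) and the second to the human-electronic agreement (0.86).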
