The Open University

The evaluation of electronic marking of examinations

Thomas, Pete (2003). The evaluation of electronic marking of examinations. In: 8th Annual International Conference on Innovation and Technology in Computer Science Education, Jul 2003, Thessaloniki, Greece.


This paper discusses an approach to the electronic (automatic) marking of examination papers, in particular, the extent to which it is possible to mark a candidate’s answers automatically and return, within a very short period of time, a result comparable with a manually produced score. The investigation showed that there are good reasons for manual intervention in a predominantly automatic process. The paper discusses the results of tests of the automatic marking process that, in two experiments, yielded grades for examination scripts comparable with those awarded by human markers (although the automatic grade tends to be the lower of the two). An analysis of the correlations between the human and automatic markers shows highly significant relationships between the human markers (between 0.91 and 0.95) and a significant relationship between the average human marker score and the electronic score (0.86).

Item Type: Conference or Workshop Item
ISSN: 0097-8418
Keywords: electronic examinations; automatic marking
Academic Unit/School: Faculty of Science, Technology, Engineering and Mathematics (STEM) > Computing and Communications
Faculty of Science, Technology, Engineering and Mathematics (STEM)
Research Group: Centre for Research in Computing (CRC)
Item ID: 2757
Depositing User: Pete Thomas
Date Deposited: 15 Aug 2006
Last Modified: 02 May 2018 12:33