 

The evaluation of electronic marking of examinations

Thomas, Pete (2003). The evaluation of electronic marking of examinations. In: 8th Annual International Conference on Innovation and Technology in Computer Science Education, July 2003, Thessaloniki, Greece.

URL: http://delivery.acm.org/10.1145/970000/961528/p50-...

Abstract

This paper discusses an approach to the electronic (automatic) marking of examination papers, in particular the extent to which a candidate's answers can be marked automatically, returning within a very short period of time a result comparable with a manually produced score. The investigation showed that there are good reasons for manual intervention in a predominantly automatic process. The paper reports the results of tests of the automatic marking process which, in two experiments, yielded grades for examination scripts comparable with those awarded by human markers (although the automatic grade tends to be the lower of the two). An analysis of the correlations between the human and automatic markers shows highly significant correlations among the human markers (between 0.91 and 0.95) and a significant correlation between the average human mark and the electronic score (0.86).
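The correlation figures quoted above are agreement coefficients between marker scores. As a minimal sketch of how such coefficients can be computed (the marks below are made-up placeholders, not data from the paper, and this is not the authors' own analysis code):

```python
# Pearson correlations between human markers, and between the average
# human mark and the automatic (electronic) mark, per examination script.
from statistics import correlation, mean  # requires Python 3.10+

# Hypothetical per-script marks; in the paper these would come from
# two human markers and the automatic marking process.
human_a = [62, 71, 55, 80, 67, 49, 74]
human_b = [60, 73, 58, 78, 65, 51, 70]
auto    = [58, 70, 53, 77, 63, 47, 69]  # automatic grade tends to be lower

# Agreement between the two human markers.
print(f"human vs human: {correlation(human_a, human_b):.2f}")

# Agreement between the average human mark and the electronic score.
human_avg = [mean(pair) for pair in zip(human_a, human_b)]
print(f"average human vs automatic: {correlation(human_avg, auto):.2f}")
```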

Item Type: Conference Item
ISSN: 0097-8418
Keywords: electronic examinations; automatic marking
Academic Unit/Department: Mathematics, Computing and Technology > Computing & Communications
Interdisciplinary Research Centre: Centre for Research in Computing (CRC)
Item ID: 2757
Depositing User: Pete Thomas
Date Deposited: 15 Aug 2006
Last Modified: 02 Dec 2010 19:48
URI: http://oro.open.ac.uk/id/eprint/2757