Thomas, Pete; Smith, Neil; and Waugh, Kevin (2008).
DOI: https://doi.org/10.1080/17439880802324251
Abstract
To date there has been very little work on the machine understanding of imprecise diagrams, such as diagrams drawn by students in response to assessment questions. Imprecise diagrams exhibit faults such as missing, extraneous and incorrectly formed elements. The semantics of imprecise diagrams are difficult to determine. While there have been successful attempts at assessing text (essays) automatically, little success with diagrams has been reported. In this paper, we explain an approach to the automatic interpretation of graph-based diagrams based on a five-stage framework. The paper describes our approach to automatically grading graph-based diagrams and reports on some experiments into the automatic grading of student diagrams. The diagrams were produced under examination conditions and the output of the automatic marker was compared with the original human marks across a large number of diagrams. The experiments show good agreement between the performance of the automatic marker and the human markers. The paper also describes how the automatic marking algorithm has been incorporated into a variety of software teaching and learning tools. One tool supports the human grading of entity-relationship diagrams (ERDs). Another tool is for student use during the revision of ERDs. This tool automatically marks student answers in real-time and provides dynamically created feedback to help guide the student's progress.
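The paper itself details the five-stage framework; as a rough illustration of the kind of graph matching involved, the sketch below scores a student diagram (a set of node labels plus a set of edges) against a model answer, using fuzzy label matching to tolerate the imprecision the abstract describes. All names, weights, and thresholds here are hypothetical, not taken from the paper.

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.8):
    # Fuzzy label comparison tolerates the minor wording
    # differences typical of imprecise student diagrams.
    # (Threshold is an illustrative choice, not from the paper.)
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def grade(student, model):
    """Score a student graph against a model answer.

    Each diagram is (nodes, edges): a set of labels and a set of
    frozensets pairing two labels. Returns a mark in [0, 1].
    """
    s_nodes, s_edges = student
    m_nodes, m_edges = model
    # Greedily match each model node to at most one student node.
    mapping = {}
    unused = set(s_nodes)
    for m in sorted(m_nodes):
        for s in sorted(unused):
            if similar(m, s):
                mapping[m] = s
                unused.discard(s)
                break
    node_score = len(mapping) / len(m_nodes)
    # An edge counts only if both of its endpoints were matched
    # and the corresponding student edge exists.
    matched_edges = sum(
        1 for e in m_edges
        if all(v in mapping for v in e)
        and frozenset(mapping[v] for v in e) in s_edges
    )
    edge_score = matched_edges / len(m_edges) if m_edges else 1.0
    # Equal weighting of nodes and edges is an arbitrary choice here.
    return 0.5 * node_score + 0.5 * edge_score

model = ({"Student", "Course"}, {frozenset({"Student", "Course"})})
answer = ({"student", "Courses"}, {frozenset({"student", "Courses"})})
print(grade(answer, model))  # full marks despite label variations
```

A real marker of this kind would need richer element types (entities, attributes, relationship cardinalities for ERDs) and partial credit per element, but the core idea of aligning an imprecise graph with a model graph is the same.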