
E-assessment for learning? Exploring the potential of computer-marked assessment and computer-generated feedback, from short-answer questions to assessment analytics.

Jordan, Sally Elizabeth (2014). E-assessment for learning? Exploring the potential of computer-marked assessment and computer-generated feedback, from short-answer questions to assessment analytics. PhD thesis The Open University.

Full text available as: PDF (Version of Record)


This submission draws on research from twelve publications, all addressing some aspect of the broad research question: “Can interactive computer-marked assessment improve the effectiveness of assessment for learning?”

The work starts from a consideration of the conditions under which assessment of any sort is predicted to best support learning, and reviews the broader literature on assessment and feedback before considering the potential of computer-based assessment. The focus is on relatively sophisticated constructed-response questions and on the impact of feedback that is instantaneous, tailored, and increases in detail over successive attempts. A range of qualitative and quantitative research methodologies is used to investigate the factors that influence the engagement of distance learners of science with computer-marked assessment and computer-generated feedback.

It is concluded that the strongest influence on engagement is the student’s understanding of what they are required to do, including their understanding of the wording of assessment tasks and feedback. Clarity of wording is thus important, as is an iterative design process that allows for improvements to be made. Factors such as cut-off dates can have considerable impact, pointing to the importance of good overall assessment design, and more generally to the power and responsibility that lie in the hands of remote developers of online assessment and teaching.

Four of the publications describe research into the marking accuracy and effectiveness of questions to which students give their answer as a short phrase or sentence. Relatively simple pattern-matching software has been shown to give marking accuracy at least as good as that of human markers and more sophisticated computer-marked systems, provided questions are developed on the basis of responses from students at a similar level. However, educators continue to use selected-response questions in preference to constructed-response questions, despite concerns over the validity and authenticity of selected-response questions. Factors contributing to the low take-up of more sophisticated computer-marked tasks are discussed.
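The kind of pattern matching described above can be illustrated with a toy sketch. This is not the software evaluated in the thesis; the patterns, synonym sets, and sample responses below are hypothetical, and the sketch assumes a simple model in which a response is marked correct when it contains at least one synonym for every required concept.

```python
import re

# Hypothetical marking pattern: a correct response must mention both
# "evaporation" (in some form) and a heat source. Each inner set lists
# acceptable synonyms for one required concept.
PATTERNS = [
    [{"evaporates", "evaporation", "vaporises"},
     {"heat", "heated", "warms", "warming", "sun"}],
]

def tokens(response):
    """Lower-case the response and split it into a set of words."""
    return set(re.findall(r"[a-z]+", response.lower()))

def mark(response, patterns=PATTERNS):
    """Return True if any pattern is fully matched, i.e. every required
    concept in that pattern shares at least one word with the response."""
    words = tokens(response)
    return any(all(words & concept for concept in pattern)
               for pattern in patterns)

print(mark("The water evaporates because it is heated by the sun"))  # True
print(mark("The water disappears"))                                  # False
```

In practice, as the thesis notes, such patterns only mark accurately when they are developed iteratively against real responses from students at a similar level, so that common correct phrasings and common errors are both anticipated.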

E-assessment also has the potential to improve the learning experience indirectly by providing information to educators about student engagement and student errors, at either the cohort or the individual student level. The effectiveness of these “assessment analytics” is also considered; it is concluded that they have the potential to provide deep insight at cohort level and an early warning of individual students at risk.
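An early-warning signal of the kind described above can be sketched in a few lines. The data, threshold values, and flagging rule here are all hypothetical illustrations, not the analytics developed in the thesis: the sketch simply flags a student who has missed too many computer-marked assignments or whose average score on attempted assignments is low.

```python
# Hypothetical engagement records: scores on successive computer-marked
# assignments, with None marking an assignment that was not attempted.
records = {
    "student_a": [0.9, 0.8, 0.85],
    "student_b": [0.4, None, None],
}

def at_risk(scores, min_score=0.5, max_missed=1):
    """Flag a student who missed more than max_missed assignments, or
    whose mean score on attempted assignments falls below min_score."""
    missed = sum(s is None for s in scores)
    attempted = [s for s in scores if s is not None]
    low = bool(attempted) and sum(attempted) / len(attempted) < min_score
    return missed > max_missed or low

flags = {name: at_risk(scores) for name, scores in records.items()}
print(flags)  # {'student_a': False, 'student_b': True}
```

A real system would of course draw on richer data (timing of submissions, use of feedback, successive attempts), but the principle is the same: routine assessment data, aggregated per student or per cohort, can surface problems before final results do.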

Item Type: Thesis (PhD)
Copyright Holders: 2014 Sally Jordan
Extra Information: Due to copyright reasons, only part 1 is publicly available.
Keywords: educational tests and measurements; communication in education; grading and marking
Academic Unit/School: Faculty of Science, Technology, Engineering and Mathematics (STEM) > Physical Sciences
Research Group: eSTEeM
Item ID: 41115
Depositing User: Sally Jordan
Date Deposited: 14 Oct 2014 08:34
Last Modified: 03 Jul 2019 01:38