Jordan, Sally Elizabeth
(2014).
DOI: https://doi.org/10.21954/ou.ro.0000a09b
Abstract
This submission draws on research from twelve publications, all addressing some aspect of the broad research question: “Can interactive computer-marked assessment improve the effectiveness of assessment for learning?”
The work starts from a consideration of the conditions under which assessment of any sort is predicted to best support learning, and reviews the wider literature on assessment and feedback before considering the potential of computer-based assessment. The focus is on relatively sophisticated constructed-response questions and on the impact of feedback that is instantaneous, tailored to the student’s response, and increasing in detail across successive attempts. A range of qualitative and quantitative research methods is used to investigate the factors that influence the engagement of distance learners of science with computer-marked assessment and computer-generated feedback.
It is concluded that the strongest influence on engagement is students’ understanding of what they are required to do, including their understanding of the wording of assessment tasks and feedback. Clarity of wording is thus important, as is an iterative design process that allows improvements to be made. Factors such as cut-off dates can also have a considerable impact on engagement, pointing to the importance of good overall assessment design and, more generally, to the power and responsibility that lie in the hands of remote developers of online assessment and teaching.
Four of the publications describe research into the marking accuracy and effectiveness of questions to which students give their answer as a short phrase or sentence. Relatively simple pattern-matching software has been shown to mark at least as accurately as human markers and more sophisticated computer-marked systems, provided that the questions are developed on the basis of responses from students at a similar level. However, educators continue to use selected-response questions in preference to constructed-response questions, despite concerns over the validity and authenticity of the former. Factors contributing to the low take-up of more sophisticated computer-marked tasks are discussed.
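To make the general technique concrete, the sketch below shows one minimal way that pattern-matching marking of short free-text answers can work: each question carries a small set of acceptable answer patterns, and a response is accepted if it matches any of them. This is an illustration only, not the software evaluated in the publications; the question, patterns and function names are invented for the example.

```python
import re

def mark_response(response: str, accept_patterns: list[str]) -> bool:
    """Accept a short free-text response if it matches any pattern."""
    # Normalise case and whitespace before matching.
    normalised = " ".join(response.lower().split())
    return any(re.search(p, normalised) for p in accept_patterns)

# Hypothetical question: "Why does a metal spoon feel colder than a wooden
# one at the same temperature?" The patterns would be drafted, and then
# refined iteratively, against real responses from students at a similar
# level, which is the condition the research identifies for accurate marking.
ACCEPT = [
    r"metal.*conduct",                               # "metal is a good conductor"
    r"conducts? heat (faster|better|more quickly)",  # "it conducts heat faster"
]

print(mark_response("Metal conducts heat away from the hand", ACCEPT))  # True
print(mark_response("The wood is colder", ACCEPT))                      # False
```

Even in a sketch this small, marking quality rests entirely on how well the patterns anticipate genuine student responses, which is why development against answers from students at a similar level matters.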
E-assessment also has the potential to improve the learning experience indirectly, by providing educators with information about student engagement and student errors at either cohort or individual level. The effectiveness of these “assessment analytics” is also considered; it is concluded that they have the potential to provide deep general insight as well as early warning of at-risk students.
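As one illustration of what such an analytic might look like, the sketch below flags students whose engagement with computer-marked questions has dropped below a chosen threshold. The data model, threshold and rule are invented for illustration; the publications do not prescribe this particular approach.

```python
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    attempted: int  # computer-marked questions attempted so far
    released: int   # questions made available to the student so far

def at_risk(student: Student, min_rate: float = 0.5) -> bool:
    """Flag a student whose attempt rate falls below the threshold."""
    if student.released == 0:
        return False  # nothing released yet, so nothing to judge
    return student.attempted / student.released < min_rate

cohort = [
    Student("A", attempted=9, released=10),
    Student("B", attempted=2, released=10),
]
print([s.name for s in cohort if at_risk(s)])  # ['B']
```

A rule this simple would at most serve as an early-warning signal prompting human follow-up, not as a judgement about the student.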