Rosewell, J. P. (2011). URL: http://caaconference.co.uk/
Abstract
Multiple choice questions (MCQs) are the basic fare of e-assessment. MCQs are robust and easy to implement, but they are not pedagogically ideal; open questions are preferable, but automated marking of free-text answers is problematic (although see Butcher & Jordan 2010).
A possible squaring of this circle is to appropriate the technique of confidence-based marking (CBM). In CBM, a student selects both an answer and a level of confidence in that answer: they score full marks for knowing that they know the correct answer, some credit for a tentative correct answer, but are penalised if they believe they know the answer yet get it wrong (Gardner-Medwin 1995, 2006). There are several motivations for CBM: it rewards care and effort, engendering greater engagement, and it encourages reflective learning (Gardner-Medwin & Curtin 2007; Nix & Wyllie 2011).
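To make the marking scheme concrete, the following is a minimal Python sketch of a CBM scoring rule. The mark values are an assumption taken from the scheme used in Gardner-Medwin's LAPT system (correct answers score 1, 2 or 3 at confidence levels 1 to 3; wrong answers score 0, -2 or -6); the function name and structure are illustrative, not any published implementation.

```python
# Sketch of a confidence-based marking (CBM) rule. Mark values follow
# Gardner-Medwin's LAPT scheme (an assumption of this sketch, not a
# specification from the project): correct answers score 1, 2 or 3 at
# confidence levels 1-3; wrong answers score 0, -2 or -6.

CBM_MARKS = {
    # confidence level: (mark if correct, mark if wrong)
    1: (1, 0),
    2: (2, -2),
    3: (3, -6),
}

def cbm_score(confidence: int, correct: bool) -> int:
    """Return the CBM mark for an answer given at a confidence level."""
    if confidence not in CBM_MARKS:
        raise ValueError("confidence must be 1, 2 or 3")
    mark_if_correct, mark_if_wrong = CBM_MARKS[confidence]
    return mark_if_correct if correct else mark_if_wrong
```

The asymmetric penalties are what reward honest self-assessment: under these values, reporting confidence 3 maximises the expected mark only when the student judges their chance of being correct to be above 80%, and confidence 2 only above 67%.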
This project will take CBM and, with one simple change, enrol it for a different end. Here the MCQ is presented in two stages. Initially, the question is presented with no answer options visible; instead, the student must set their confidence level that they know the answer. Only then are the possible answers revealed, and the student answers as a normal MCQ. The marking scheme follows standard CBM practice. Mechanically the question remains a simple MCQ: answer matching is trivial and robust, questions are easy to implement, and existing question banks can be reused. However, to the student, the question is effectively transformed from a closed MCQ into an open question. They need to formulate an answer before they can decide their confidence in it, so they must decide their answer in the absence of any positive or negative cues, reducing the chance of being led by misconceptions or of working backwards from the options.
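As an illustration of the two-stage presentation, here is a sketch of a certainty-first question as a simple console interaction, reusing the cbm_score function from the sketch above; the prompts and question format are hypothetical, not taken from the project.

```python
def ask_certainty_first(stem: str, options: list[str], answer_index: int) -> int:
    """Run one certainty-first CBM question and return its mark.

    Stage 1: show only the question stem and commit a confidence level.
    Stage 2: reveal the answer options, take a choice, and score via CBM.
    """
    # Stage 1: the options are withheld, so the student must formulate
    # an answer (and judge their confidence) without positive or
    # negative cues from the distractors.
    print(stem)
    confidence = int(input("Confidence that you know the answer (1-3): "))

    # Stage 2: a normal MCQ; answer matching stays trivial and robust.
    for number, option in enumerate(options, start=1):
        print(f"{number}. {option}")
    chosen = int(input("Your answer (option number): "))

    return cbm_score(confidence, chosen - 1 == answer_index)
```

Mechanically this is still a plain MCQ with one extra input before the options appear, which is why existing question banks could be reused unchanged.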
The project will trial certainty-first CBM questions in an Open University distance learning course under a controlled experimental design, to probe whether students using CBM engage better with questions, improve their learning, and become more reflective learners (Nicol 2007). Measures of assignment scores and time on task will be collected, together with a survey and/or interviews to probe attitudinal aspects.
References
Butcher, P. G., & Jordan, S. E. (2010). A comparison of human and computer marking of short free-text student responses. Computers & Education, 55, 489-499.
Gardner-Medwin, A. R. (1995). Confidence assessment in the teaching of basic science. Association for Learning Technology Journal, 3, 80-85.
Gardner-Medwin, A. R. (2006). Confidence-based marking: towards deeper learning and better exams. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (Chapter 12). London: Routledge.
Gardner-Medwin, T., & Curtin, N. (2007). Certainty-based marking (CBM) for reflective learning and proper knowledge assessment. Online conference: REAP07: Assessment Design for Learner Responsibility, Re-Engineering Assessment Practices in Higher Education, Universities of Strathclyde, Glasgow and Glasgow Caledonian. Online: http://www.ucl.ac.uk/lapt/REAP_cbm.pdf
Nicol, D. (2007). E-assessment by design: Using multiple-choice tests to good effect. Journal of Further and Higher Education, 31, 53-64.
Nix, I., & Wyllie, A. (2011). Exploring design features to enhance computer-based assessment: Learners' views on using a confidence indicator tool and computer-based feedback. British Journal of Educational Technology, 42, 101-112.