What does it all mean? Tutor and student experiences of marking grids for assessment on a final year module

Duckworth, Jenny and Kopinska, Harriet (2022). What does it all mean? Tutor and student experiences of marking grids for assessment on a final year module. In: Horizons in STEM Higher Education Conference 2022: Making Connections, Innovating and Sharing Pedagogy, 29-30 Jun 2022, University College London.


Effective assessment drives learning and is essential to student success (Gibbs, 2006). Assessment criteria need to be clear so that they can be applied consistently by markers and interpreted by students. Whilst the provision of feedback (e.g. Hattie and Clarke, 2018) and the grading of assessment (e.g. Bloxham et al., 2016) are relatively well studied, there have been few comparative studies of tutors' experience of applying assessment criteria to written assignments and students' experience of interpreting them.

Here we focus on the Open University final year interdisciplinary module ‘Environment: responding to change’, which uses criterion-based marking according to learning outcomes (LOs). Tutors provide assessment feedback using rubrics in the form of marking grids containing a detailed breakdown of the criteria relevant for each LO. The grids are designed to facilitate application of LO grading scales and to enable parity between tutor mark allocations, whilst giving constructive feedback to students around LOs. However, tutors have reported challenges in applying the criteria consistently, whilst student perception of the grids is unknown, hence our research question: “How do students and tutors use the marking grids on this module and what is their experience of this approach?”

We used a mixed methods approach to collect quantitative and qualitative data on how tutors use the marking grids to apply criteria and award marks, and how students interpret the grids and apply them to their learning. Online surveys involving Likert scale and free text questions were completed by students from the current module cohort and tutors supporting them. These were followed up by more detailed interviews with a subset of students and tutors, with thematic analysis undertaken on the transcripts.

In this presentation we will outline our approach, report the preliminary findings from our data analysis and discuss their implications. This analysis will give insight into student and tutor perspectives on the use and interpretation of marking grids, enabling the approach to be tailored for the module so that the grids are clearer and easier to use and interpret.

The study concerns an interdisciplinary module that sits at the interface of science and policy. Such interdisciplinary modules present a unique assessment challenge as they represent the coming together of different disciplinary cultures (Jessop and Maleckar, 2016). Thus, in addition to informing future assessment on the module, our findings will have wider applicability to modules using criterion-based marking in many STEM disciplines.

Bloxham, S., den-Outer, B., Hudson, J. and Price, M. (2016) ‘Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria’, Assessment & Evaluation in Higher Education, 41(3), pp. 466-481. doi: 10.1080/02602938.2015.1024607.

Gibbs, G. (2006) ‘How assessment frames student learning’ pp. 23-27 in Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education, London, Routledge.

Hattie, J. and Clarke, S. (2018) Visible Learning: Feedback, London, Routledge.

Jessop, T. and Maleckar, B. (2016) ‘The influence of disciplinary assessment patterns on student learning: a comparative study’, Studies in Higher Education, 41(4), pp. 696-711. doi:
