Wakeman, Christopher Edward (2002). DOI: https://doi.org/10.21954/ou.ro.00004995
Abstract
This case study investigates the nature of assessment, its validity and its relationship with learning on the Business and Technician Education Council (BTEC) National Certificate (NC) courses in engineering principles. The study took place within the No. 1 School of Technical Training at Royal Air Force (RAF) Cosford, which is an accredited centre for BTEC (NC) and Higher National Certificate (HNC) awards in engineering-based subjects. The No. 1 School of Technical Training has approximately 2000 students working towards BTEC awards at any one time.
The principal aims of the study were to identify the essential characteristics of the assessment strategy for BTEC (NC) courses at RAF Cosford and to evaluate its validity. Particular emphasis was placed on the appropriateness of the assessment procedures identified and on the implications for fair and impartial assessment, especially with regard to learners with different individual learning styles.
Methodologies used to investigate the specific questions within this research transcend the theoretical divide between positivist and interpretative ideologies, giving the study an eclectic character that combines quantitative and qualitative techniques. Specific procedures employed during the study include documentary analysis and semi-formal interviews during the early part of the enquiry, followed by statistical mapping and probability tests as the work progressed. Honey and Mumford's (1986) learning styles questionnaire was used to identify individual learning styles in a sample of students, supplemented by a new scoring system that placed each learner on a polar graph according to the individual learning style identified.
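The abstract does not detail this scoring system, but as a minimal sketch of one way such a placement could work, the four Honey and Mumford style scores (Activist, Reflector, Theorist and Pragmatist, each conventionally scored out of 20) could be set at 90-degree intervals and combined into a single polar point by vector summation. Everything below, including the angular layout and the learner's scores, is an illustrative assumption rather than the thesis's actual method.

```python
import math
import matplotlib.pyplot as plt

# The four Honey and Mumford (1986) learning style dimensions, placed
# at 90-degree intervals on the polar axes (an assumption of this
# sketch; the thesis's own scoring scheme is not given in the abstract).
STYLES = ["Activist", "Reflector", "Theorist", "Pragmatist"]
ANGLES = [i * math.pi / 2 for i in range(4)]  # 0, 90, 180, 270 degrees

def place_on_polar(scores):
    """Collapse four style scores (0-20 each) into a single polar point.

    The vector sum of the per-style scores gives an angle (the learner's
    dominant blend of styles) and a radius (how pronounced that blend is).
    """
    x = sum(s * math.cos(a) for s, a in zip(scores, ANGLES))
    y = sum(s * math.sin(a) for s, a in zip(scores, ANGLES))
    return math.atan2(y, x), math.hypot(x, y)

# Hypothetical learner: strongly Reflector/Theorist, weakly Activist.
theta, r = place_on_polar([4, 18, 15, 9])

ax = plt.subplot(projection="polar")
ax.set_xticks(ANGLES)
ax.set_xticklabels(STYLES)
ax.plot(theta, r, "o")
plt.show()
```

Under this reading, the angle of the plotted point indicates which style (or blend of styles) dominates, and the radius how strongly, which is consistent with the abstract's description of placing each learner at a position depending on the individual learning style identified.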
The findings from the study revealed a complex system of assessment for each BTEC unit investigated; this has been described as 'dendritic' in nature because of its tree-like structure when formed into a flow diagram. Concerning validity, the study makes a number of recommendations for improvement. My conclusion in this area is that although validity was at acceptable levels in certain respects, poor question design, inadequate criteria and the lack of an experiential approach reduce validity overall.

The most significant issue regarding validity is probably whether different learners are in fact being assessed on the same cognitive domains during summative assessment, as there is compelling evidence within this study that certain learners are disadvantaged by particular types of assessment because of their individual learning style. This 'potentially damaging side effect' has implications for reliability and validity, as individual learners may experience the same assessment procedures in different ways. This could alter the degree of difficulty experienced by learners with a particular learning style, and the problem may be compounded by the way that data are presented within a test item. There is further evidence that certain knowledge types may be more 'reactive' than others; this claim is based on research that mapped assessment performance by the different learning style groups onto pre-defined knowledge types. Though the evidence is convincing, the limitations of the sampling must be taken into account in drawing these conclusions.
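The abstract does not name the specific probability tests used, but a hedged sketch of the kind of mapping it describes, comparing assessment performance by learning style group across pre-defined knowledge types, might take the form of a contingency-table analysis. The group labels, knowledge types and counts below are all invented purely for illustration.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are Honey and Mumford style
# groups, columns are pre-defined knowledge types; each cell counts
# correct responses on items of that knowledge type. All figures are
# invented to illustrate the shape of the analysis, not taken from
# the thesis.
observed = [
    # propositional, procedural
    [34, 21],  # Activists
    [28, 30],  # Reflectors
    [25, 33],  # Theorists
    [31, 24],  # Pragmatists
]

chi2, p, dof, expected = chi2_contingency(observed)

# A small p-value would suggest that performance on the two knowledge
# types is not independent of learning style group, i.e. that some
# knowledge types are more 'reactive' to learning style than others.
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```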