Richards, Mike; Waugh, Kevin; Slaymaker, Mark; Petre, Marian; Woodthorpe, John and Gooch, Daniel
(2023).
DOI: https://doi.org/10.1145/3633287
Abstract
Cheating has been a long-standing issue in university assessments. However, the rise of ChatGPT and other free-to-use generative AI tools has democratised cheating: students can run any assessment question through such a tool and generate a superficially compelling solution, which may or may not be accurate. We ran a blinded “quality assurance” marking exercise, providing ChatGPT-generated “synthetic” scripts alongside student scripts to volunteer markers. Four end-of-module assessments from across a university CS curriculum were anonymously marked. A total of 90 scripts were marked and, barring two outliers, every undergraduate script received at least a passing grade. We also present the results of running our sample scripts through diverse quality assurance software, and of interviewing the markers. As such, we contribute a baseline understanding of how the public release of generative AI may significantly affect quality assurance processes: our analysis demonstrates that, in most cases, across a range of question formats, topics, and study levels, ChatGPT is at least capable of producing adequate solutions.
About
- Item ORO ID
- 89325
- Item Type
- Journal Item
- Keywords
- ChatGPT; generative AI; cheating; quality assurance; University assessment
- Academic Unit or School
- Faculty of Science, Technology, Engineering and Mathematics (STEM) > Computing and Communications;
  Faculty of Science, Technology, Engineering and Mathematics (STEM)
- Copyright Holders
- © 2023 The Authors
- Depositing User
- Daniel Gooch