Gooch, Daniel; Waugh, Kevin; Richards, Mike; Slaymaker, Mark and Woodthorpe, John (2024).
DOI: https://doi.org/10.1145/3656478
Abstract
Large language models (LLMs) allow students to generate superficially compelling solutions to assessment questions. Despite many flaws, LLM detection tools provide us with some understanding of the scale of the issue in university assessments. Using the Turnitin AI detection tool, we present an analysis of 10,725 student assessments submitted in two cohorts during the summers of 2022 and 2023. We observe an increase in the number of scripts flagged as containing AI-generated material. We also present an analysis of the demographic profile of flagged scripts, finding that male students, students with lower prior educational attainment, and younger students are more likely to be flagged.