Exploring the Profile of University Assessments Flagged as Containing AI-Generated Material

Gooch, Daniel; Waugh, Kevin; Richards, Mike; Slaymaker, Mark and Woodthorpe, John (2024). Exploring the Profile of University Assessments Flagged as Containing AI-Generated Material. ACM Inroads, 15(2) pp. 39–47.

DOI: https://doi.org/10.1145/3656478

Abstract

Large language models (LLMs) allow students to generate superficially compelling solutions to assessment questions. Despite their many flaws, LLM detection tools provide some understanding of the scale of the issue in university assessments. Using the TurnItIn AI detection tool, we present an analysis of 10,725 student assessments submitted by two cohorts during the summers of 2022 and 2023. We observe an increase in the number of scripts flagged as containing AI-generated material. We also present an analysis of the demographic profile of flagged scripts, finding that male students, students with lower prior educational attainment, and younger students are more likely to be flagged.
