Learning Analytics and Fairness: Do Existing Algorithms Serve Everyone Equally?

Bayer, Vaclav; Hlosta, Martin and Fernandez, Miriam (2021). Learning Analytics and Fairness: Do Existing Algorithms Serve Everyone Equally? In: Artificial Intelligence in Education. AIED 2021. Lecture Notes in Computer Science, vol 12749 (Roll, I.; McNamara, D.; Sosnovsky, S.; Luckin, R. and Dimitrova, V. eds.), Springer.

DOI: https://doi.org/10.1007/978-3-030-78270-2_12

URL: https://aied2021.science.uu.nl/


Systemic inequalities persist within Higher Education (HE). Reports from Universities UK show a 13% degree-awarding gap for Black, Asian and Minority Ethnic (BAME) students, with similar effects found across other protected attributes, such as gender or disability. In this paper, we study whether existing prediction models for identifying students at risk of failing (and hence for providing early and adequate support) work equally effectively for majority vs. minority groups. We also investigate whether disaggregating data by protected attributes and building an individual prediction model for each subgroup (e.g., a specific model for female students vs. one for male students) could enhance model fairness. Our results, based on models trained over 35,067 students and evaluated over 32,538 students, show that existing prediction models do indeed seem to favour the majority group. Contrary to our hypothesis, creating individual models does not improve accuracy or fairness.
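The disaggregation idea the abstract describes can be sketched in a few lines: train one pooled predictor on everyone, train a separate predictor per protected-attribute subgroup, and compare their accuracy on each group. The sketch below is illustrative only, not the paper's actual pipeline; the toy records, the group labels "A"/"B", and the use of a majority-class baseline as the "model" are all assumptions made for brevity.

```python
# Illustrative sketch (hypothetical data, not the paper's method):
# pooled vs. per-subgroup prediction, with per-group accuracy.
from collections import Counter

# Toy records: (protected_attribute_value, outcome), 1 = pass, 0 = fail.
records = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),  # majority group, mostly pass
    ("B", 0), ("B", 0), ("B", 1),            # minority group, mostly fail
]

def majority_class(outcomes):
    """Baseline 'model': always predict the most common outcome."""
    return Counter(outcomes).most_common(1)[0][0]

def accuracy(prediction, outcomes):
    """Fraction of outcomes matched by a constant prediction."""
    return sum(1 for y in outcomes if y == prediction) / len(outcomes)

# Pooled model: one prediction fitted on all students together.
pooled_pred = majority_class([y for _, y in records])

# Disaggregated models: one prediction fitted per subgroup.
groups = {}
for g, y in records:
    groups.setdefault(g, []).append(y)

for g, outcomes in sorted(groups.items()):
    per_group_pred = majority_class(outcomes)
    print(g,
          "pooled acc:", round(accuracy(pooled_pred, outcomes), 2),
          "per-group acc:", round(accuracy(per_group_pred, outcomes), 2))
```

On this toy data the pooled model adopts the majority group's dominant outcome, so it scores well on group A and poorly on group B, mirroring the kind of imbalance the paper examines; whether per-group models actually close that gap on real data is exactly what the paper tests (and, per the abstract, they did not).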
