Learning Analytics and Fairness: Do Existing Algorithms Serve Everyone Equally?

Bayer, Vaclav; Hlosta, Martin and Fernandez, Miriam (2021). Learning Analytics and Fairness: Do Existing Algorithms Serve Everyone Equally? In: AIED 2021; 22nd International Conference on Artificial Intelligence in Education, 14-18 Jun 2021, ONLINE from Utrecht.

URL: https://aied2021.science.uu.nl/

Abstract

Systemic inequalities still exist within Higher Education (HE). Reports from Universities UK show a 13% degree-awarding gap for Black, Asian and Minority Ethnic (BAME) students, with similar effects found when comparing students across other protected attributes, such as gender or disability. In this paper, we study whether existing prediction models for identifying students at risk of failing (and hence for providing early and adequate support) work equally effectively for majority vs. minority groups. We also investigate whether disaggregating the data by protected attributes and building an individual prediction model for each subgroup (e.g., a specific prediction model for female students vs. one for male students) could enhance model fairness. Our experiments, conducted over 35,067 students and evaluated on 32,538 students, show that existing prediction models do indeed seem to favour the majority group. Contrary to our hypothesis, creating individual models does not help improve accuracy or fairness.
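
The sketch below is a minimal illustration (not the authors' code) of the comparison the abstract describes: one pooled at-risk prediction model versus separate models trained on data disaggregated by a protected attribute, with accuracy reported per subgroup. The data, the binary "group" attribute, and the logistic-regression model are all assumptions made for illustration only.

```python
# Minimal sketch: pooled model vs. per-subgroup models, assessed per group.
# All data is synthetic; feature meanings and the protected attribute are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n = 2000
X = rng.normal(size=(n, 5))              # e.g. VLE activity / assessment features (synthetic)
group = rng.integers(0, 2, size=n)       # 0 = majority, 1 = minority (synthetic attribute)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=1.0, size=n) > 0).astype(int)

train = rng.random(n) < 0.5              # simple random train/test split
test = ~train

# 1) Single pooled model trained on all students, evaluated per subgroup.
pooled = LogisticRegression().fit(X[train], y[train])
for g in (0, 1):
    mask = test & (group == g)
    acc = accuracy_score(y[mask], pooled.predict(X[mask]))
    print(f"pooled model, group {g}: accuracy {acc:.3f}")

# 2) Individual models trained on each subgroup's data only.
for g in (0, 1):
    tr = train & (group == g)
    te = test & (group == g)
    sub = LogisticRegression().fit(X[tr], y[tr])
    acc = accuracy_score(y[te], sub.predict(X[te]))
    print(f"per-group model, group {g}: accuracy {acc:.3f}")
```

Comparing the per-group accuracies of the pooled model against those of the subgroup-specific models mirrors the question posed in the paper; other fairness metrics could be substituted for accuracy in the same loop.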
