Zhao, Xuan; Fabbrizzi, Simone; Reyero Lobo, Paula; Ghodsi, Siamak; Broelemann, Klaus; Staab, Steffen and Kasneci, Gjergji (2024).
DOI: https://doi.org/10.1109/bigdata62323.2024.10825191
Abstract
To address bias issues, fair machine learning usually jointly optimizes two (or more) metrics aimed at predictive utility and fairness. However, the inherent under-representation of minorities in the data often makes the disparate impact on subpopulations less noticeable and harder to address during learning. In this paper, we propose a novel adversarial reweighting method to mitigate such disparate impact. To balance the data distribution between the majority and minority groups, our approach prefers samples from the majority group that are closer to the minority group, as evaluated by the Wasserstein distance. Theoretical analysis shows the effectiveness of our adversarial reweighting approach. Experiments demonstrate that our approach mitigates disparate impact without sacrificing classification accuracy, outperforming related state-of-the-art methods on image and tabular benchmark datasets.
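The core idea of the abstract, upweighting majority-group samples that lie closer to the minority group, can be illustrated with a toy sketch. Note this is not the paper's method: the paper learns the reweighting adversarially with a Wasserstein-distance criterion, whereas the sketch below substitutes a simple nearest-neighbor distance as a stand-in, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D features: a large "majority" cluster and a small "minority" cluster.
X_maj = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))
X_min = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(20, 2))

# Stand-in for "closeness to the minority group": distance from each majority
# sample to its nearest minority sample. (The paper instead evaluates closeness
# via the Wasserstein distance, learned adversarially.)
dists = np.linalg.norm(X_maj[:, None, :] - X_min[None, :, :], axis=-1).min(axis=1)

# Upweight majority samples that are closer to the minority group, then
# normalize the weights to sum to one; these weights would rebalance the
# training loss over the majority group.
weights = np.exp(-dists)
weights /= weights.sum()
```

Because `exp(-d)` is monotonically decreasing, the majority sample nearest the minority cluster receives the largest weight, shifting the effective training distribution toward the under-represented region.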