Fair for a few: Improving Fairness in Doubly Imbalanced Datasets

Fairness has been identified as an important aspect of Machine Learning and Artificial Intelligence solutions for decision making. Recent literature offers a variety of approaches for debiasing; however, many of them fall short when the data collection is imbalanced. In this paper, we focus on a particular case, fairness in doubly imbalanced datasets, where the data collection is imbalanced both for the label and for the groups in the sensitive attribute. First, we present an exploratory analysis to illustrate the limitations of debiasing on a doubly imbalanced dataset. Then, we propose a multi-criteria based solution for finding the most suitable sampling and distribution for the label and the sensitive attribute, in terms of fairness and classification accuracy.
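To make the notion of double imbalance concrete, the following is a minimal sketch of subgroup resampling over the joint (label, sensitive attribute) partition. This is an illustrative baseline only, not the paper's multi-criteria method; the field names `label` and `group` and the uniform target size are assumptions for the example.

```python
import random
from collections import Counter, defaultdict

def resample_subgroups(rows, target_size, seed=0):
    """Resample each (label, sensitive-attribute) subgroup to target_size:
    undersample larger subgroups, oversample smaller ones with replacement.
    Hypothetical baseline; the paper searches over sampling strategies instead."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for row in rows:
        groups[(row["label"], row["group"])].append(row)
    balanced = []
    for members in groups.values():
        if len(members) >= target_size:
            balanced.extend(rng.sample(members, target_size))  # undersample
        else:
            balanced.extend(rng.choices(members, k=target_size))  # oversample
    return balanced

# Toy doubly imbalanced data: both the label and the group are skewed.
data = (
    [{"label": 1, "group": "A"}] * 50
    + [{"label": 1, "group": "B"}] * 5
    + [{"label": 0, "group": "A"}] * 10
    + [{"label": 0, "group": "B"}] * 2
)
balanced = resample_subgroups(data, target_size=20)
counts = Counter((r["label"], r["group"]) for r in balanced)
```

After resampling, every (label, group) cell holds exactly `target_size` rows, removing both sources of imbalance at once; the paper's contribution is choosing the target distribution rather than fixing it uniformly as here.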
@article{yalcin2025_2506.14306,
  title={Fair for a few: Improving Fairness in Doubly Imbalanced Datasets},
  author={Ata Yalcin and Asli Umay Ozturk and Yigit Sever and Viktoria Pauw and Stephan Hachinger and Ismail Hakki Toroslu and Pinar Karagoz},
  journal={arXiv preprint arXiv:2506.14306},
  year={2025}
}