Fairer Machine Learning Software on Multiple Sensitive Attributes With Data Preprocessing

IEEE Transactions on Software Engineering (TSE), 2021
Main: 12 pages · Bibliography: 3 pages · 2 figures · 21 tables
Abstract

This research seeks to benefit the software engineering community by providing a simple yet effective approach to improving the fairness of machine learning software on data with multiple sensitive attributes. Machine learning fairness has attracted increasing attention as machine learning software is increasingly used for high-stakes and high-risk decisions. Among all fairness notions, this work specifically targets "equalized odds". Equalized odds requires that members of every demographic group do not receive disparate mistreatment. It is one of the most widely accepted fairness notions because it always admits a perfect classifier. Most existing solutions for machine learning fairness do not directly target equalized odds and only handle one sensitive attribute (e.g. sex) at a time. To overcome this limitation, we analyze the conditions of equalized odds and hypothesize that balancing the class distribution of the training data across every demographic group will improve the equalized odds of the learned model. On four real-world datasets (two of which have multiple sensitive attributes) and three synthetic datasets, our empirical results show that, at low computational overhead, the proposed preprocessing algorithm FairBalance can significantly improve equalized odds without much, if any, damage to prediction performance. FairBalance also outperforms existing state-of-the-art approaches in terms of equalized odds. To facilitate reuse, reproduction, and validation of this work, our scripts and data are available at https://github.com/hil-se/FairBalance under an open-source Apache license (v2.0).
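The core idea described above, balancing the class distribution across every demographic group before training, can be illustrated with per-sample reweighting. The sketch below is one simple way to realize that idea and is not necessarily the authors' exact FairBalance algorithm; the function name and interface are hypothetical.

```python
from collections import Counter

def balance_weights(groups, labels):
    """Illustrative group-class reweighting (not the authors' exact code).

    groups: per-sample demographic-group ids, e.g. tuples over multiple
            sensitive attributes such as ("male", "white").
    labels: per-sample class labels.

    Returns per-sample weights such that every (group, class) cell
    carries equal total weight, i.e. the class distribution is balanced
    within and across all demographic groups.
    """
    # Count how many samples fall in each (group, class) cell.
    cells = Counter(zip(groups, labels))
    # A sample's weight is inversely proportional to its cell size,
    # so each cell contributes the same total weight.
    w = [1.0 / cells[(g, y)] for g, y in zip(groups, labels)]
    # Normalize so the weights sum to the number of samples.
    scale = len(w) / sum(w)
    return [x * scale for x in w]
```

Because the output is an ordinary per-sample weight vector, it can be passed as `sample_weight` to any classifier that supports weighted training (e.g. scikit-learn estimators), which is what makes this kind of preprocessing model-agnostic and cheap.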
