Learning Fair Classifiers

Abstract

Automated data-driven decision systems are ubiquitous across a wide variety of online services, from online social networking and e-commerce to e-government. These systems rely on complex learning methods and vast amounts of data to optimize service functionality, end-user satisfaction, and profitability. However, there is a growing concern that these automated decisions can lead, even in the absence of intent, to a lack of fairness, i.e., their outcomes can have a disproportionately large adverse impact on particular groups of people sharing one or more sensitive attributes (e.g., race, sex). In this paper, we introduce a flexible mechanism to design fair classifiers in a principled manner, by leveraging a novel intuitive measure of decision boundary (un)fairness. We instantiate this mechanism on two well-known classifiers, logistic regression and support vector machines, and show on real-world data that our mechanism allows for fine-grained control of the level of fairness, often at a minimal cost in terms of accuracy.
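To make the idea concrete, the sketch below trains a logistic regression classifier under a boundary-fairness constraint. The specific constraint used here, bounding the covariance between the sensitive attribute and the signed distance to the decision boundary, the synthetic data, the threshold `c`, and the choice of SLSQP as the solver are all illustrative assumptions of this sketch, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: one feature (x1) is correlated with a binary
# sensitive attribute z, so an unconstrained classifier would
# produce decisions that correlate with z.
n = 500
z = rng.integers(0, 2, n)                      # sensitive attribute (0/1)
x1 = rng.normal(loc=z, scale=1.0)              # correlated with z
x2 = rng.normal(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x1, x2])      # intercept + features
y = (x1 + x2 + rng.normal(0, 0.5, n) > 0.5)
y_pm = 2 * y.astype(float) - 1                 # labels in {-1, +1}

def log_loss(theta):
    # Numerically stable logistic loss: mean log(1 + exp(-y * theta^T x)).
    return np.logaddexp(0.0, -y_pm * (X @ theta)).mean()

def boundary_cov(theta):
    # Empirical covariance between z and the signed distance theta^T x.
    return np.mean((z - z.mean()) * (X @ theta))

# Fairness constraint: |cov(z, theta^T x)| <= c, expressed as two
# inequality constraints of the form fun(theta) >= 0.
c = 0.01
constraints = [
    {"type": "ineq", "fun": lambda t: c - boundary_cov(t)},
    {"type": "ineq", "fun": lambda t: c + boundary_cov(t)},
]

res = minimize(log_loss, x0=np.zeros(3), method="SLSQP",
               constraints=constraints)
theta_fair = res.x
```

Shrinking `c` toward zero tightens the fairness requirement (at some cost in accuracy), while a large `c` recovers the unconstrained classifier, which is what gives the fine-grained accuracy-fairness trade-off described above.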
