Learning Individually Fair Classifier with Path-Specific Causal-Effect
Constraint
Machine learning is increasingly used to make decisions about individuals in various fields, which requires achieving good prediction accuracy while ensuring fairness with respect to sensitive features such as race or gender. This problem, however, remains difficult in complex real-world scenarios. To effectively quantify unfairness in such scenarios, existing methods utilize {\it path-specific causal effects}. However, none of them can ensure fairness for each individual without making impractical assumptions. Specifically, these assumptions require formulating the true data-generating process as a {\it causal model}, which demands an extremely deep understanding of the data and is unrealistic in practice. In this paper, we propose a framework for learning an individually fair classifier without relying on the causal model. To this end, we define the {\it probability of individual unfairness} (PIU) and solve an optimization problem that constrains an upper bound on PIU, which can be estimated from data without the causal model. We elucidate why this constraint guarantees fairness for each individual. Experimental results demonstrate that our method learns an individually fair classifier at a slight cost in prediction accuracy.
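To make the general idea concrete, here is a minimal sketch of fairness-constrained learning via a penalty term. This is {\it not} the paper's PIU estimator: the unfairness proxy used below (the average squared change in prediction when a binary sensitive feature is flipped) and the penalty weight `lam` are illustrative assumptions of ours, standing in for the paper's upper-bound constraint.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, lam=10.0, lr=0.1, epochs=500):
    """Logistic regression with a penalty on a crude individual-unfairness
    proxy: mean (p_i - p_i^flip)^2, where p_i^flip is the prediction after
    flipping the binary sensitive feature (assumed to be column 0)."""
    n, d = X.shape
    w = np.zeros(d)
    X_flip = X.copy()
    X_flip[:, 0] = 1.0 - X_flip[:, 0]  # flip the binary sensitive feature
    for _ in range(epochs):
        p = sigmoid(X @ w)
        q = sigmoid(X_flip @ w)
        # gradient of the average cross-entropy loss
        grad_loss = X.T @ (p - y) / n
        # gradient of the penalty mean (p - q)^2 w.r.t. w
        diff = p - q
        grad_pen = 2.0 / n * (
            X.T @ (diff * p * (1 - p)) - X_flip.T @ (diff * q * (1 - q))
        )
        w -= lr * (grad_loss + lam * grad_pen)
    return w

# Toy data: the sensitive feature s (column 0) influences the label,
# so an unconstrained model will pick up weight on it.
rng = np.random.default_rng(0)
n = 400
s = rng.integers(0, 2, n).astype(float)
x1 = rng.normal(size=n)
y = (0.8 * s + x1 + rng.normal(scale=0.5, size=n) > 0.5).astype(float)
X = np.column_stack([s, x1, np.ones(n)])  # [sensitive, covariate, bias]

w_fair = train_fair_logreg(X, y, lam=10.0)   # penalized model
w_plain = train_fair_logreg(X, y, lam=0.0)   # plain logistic regression
```

Under the penalty, the learned weight on the sensitive feature shrinks toward zero, so flipping that feature barely changes an individual's prediction, at some cost in fit quality. This mirrors the accuracy/fairness trade-off the abstract describes, though the actual method bounds PIU rather than penalizing this proxy.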