Learning Fair Classifiers in Online Stochastic Settings
- FaML
In many real-life situations, including job and loan applications, gatekeepers must make justified, real-time decisions about a person's fitness for a particular opportunity using only partial information. People on both sides of such decisions have understandable concerns about their fairness, especially when the decisions are made online or algorithmically. In this paper, we aim to achieve approximate group fairness in an online decision-making process in which examples are sampled i.i.d. from an underlying distribution. The fairness metric we consider is equalized odds, which requires the decision-making process to achieve approximately equal false positive and false negative rates across demographic groups. Our work builds on the classical learning-from-experts scheme, extending the Randomized Multiplicative Weights algorithm by maintaining separate weights for label classes as well as demographic groups, where the probability of choosing each set of weights is optimized for both fairness and regret. Our theoretical results show that approximately equalized odds can be achieved without sacrificing much regret. We also demonstrate the performance of the algorithm on real data sets commonly used by the fairness community.
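To make the expert-based setup concrete, below is a minimal sketch of randomized multiplicative weights with a separate weight vector per (group, label) pair, in the spirit of the abstract. Everything here is an illustrative assumption: the learning rate, the group/label keys, and in particular the use of the true label to pick a weight vector at prediction time (the paper instead optimizes the probability of choosing each set of weights for fairness and regret). This is not the paper's algorithm.

```python
import random

ETA = 0.1  # learning rate (assumed value, for illustration only)

def make_weights(n_experts, groups, labels):
    """One weight vector per (group, label) pair, initialized uniformly."""
    return {(g, y): [1.0] * n_experts for g in groups for y in labels}

def predict(weights, key, expert_preds):
    """Sample one expert in proportion to its weight and follow its prediction.

    NOTE: keying on the true label here is a simplification for the sketch;
    in an online setting the label is unknown at prediction time.
    """
    w = weights[key]
    r = random.uniform(0.0, sum(w))
    acc = 0.0
    for i, wi in enumerate(w):
        acc += wi
        if r <= acc:
            return expert_preds[i]
    return expert_preds[-1]

def update(weights, key, expert_preds, true_label):
    """Multiplicatively down-weight experts that erred on this example."""
    w = weights[key]
    for i, pred in enumerate(expert_preds):
        if pred != true_label:
            w[i] *= (1.0 - ETA)

# Toy usage: two experts, two demographic groups, binary labels.
weights = make_weights(2, groups=["A", "B"], labels=[0, 1])
stream = [("A", [0, 1], 1), ("B", [1, 1], 0), ("A", [0, 1], 1)]
for group, preds, y in stream:
    _ = predict(weights, (group, y), preds)
    update(weights, (group, y), preds, y)
```

Keeping distinct weight vectors per (group, label) pair is what lets the learner track false positive and false negative rates separately for each demographic group, which is the quantity equalized odds constrains.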
View on arXiv