Fair Decisions Despite Imperfect Predictions

8 February 2019
Niki Kilbertus, Manuel Gomez Rodriguez, Bernhard Schölkopf, Krikamol Muandet, Isabel Valera
Topics: FaML, OffRL
arXiv:1902.02979 [PDF] [HTML]
Abstract

Consequential decisions are increasingly informed by sophisticated data-driven predictive models. However, to consistently learn accurate predictive models, one needs access to ground truth labels. Unfortunately, in practice, labels may only exist conditional on certain decisions: if a loan is denied, the individual never has the chance to repay it. Hence, the observed data distribution depends on how decisions are being made. In this paper, we show that in this selective labels setting, learning a predictor from the available labeled data alone is suboptimal in terms of both fairness and utility. To avoid this undesirable behavior, we propose to directly learn decision policies that maximize utility under fairness constraints and thereby take into account how decisions affect which data is observed in the future. Our results suggest the need for a paradigm shift in fair machine learning: away from the currently prevalent practice of building predictive models from a single static dataset via risk minimization, and toward a more interactive notion of "learning to decide". In particular, such policies should not entirely neglect any part of the input space, which draws connections to explore/exploit tradeoffs in reinforcement learning, data missingness, and potential outcomes in causal inference. Experiments on synthetic and real-world data illustrate the favorable properties of learning to decide in terms of utility and fairness.
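The selective-labels effect the abstract describes is easy to reproduce in simulation. Below is a minimal sketch, not the authors' algorithm, that omits the paper's fairness constraint for brevity: it contrasts a predictor trained via risk minimization on selectively labeled data with a stochastic policy that keeps labeling every region of the input space. The toy lending setup, the ground-truth function p_repay, and the parameters COST and eps are all illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def p_repay(x):
    # Hypothetical ground truth: repayment probability peaks at x = 0
    # and exceeds the lending cost only for roughly |x| < 0.83.
    return 0.8 * np.exp(-x ** 2) + 0.1

def sample_applicants(n):
    x = rng.normal(size=(n, 1))
    y = (rng.random(n) < p_repay(x[:, 0])).astype(int)
    return x, y

COST = 0.5  # lending to an applicant pays off iff P(repay) > COST

def utility(lend, y):
    # Profit is realized (and y observed) only where we decide to lend.
    return np.mean(lend * (y - COST))

# Selective labels: a conservative legacy policy lends only when x > 1,
# so repayment labels exist only on that (unprofitable) slice.
x, y = sample_applicants(50_000)
lend0 = x[:, 0] > 1.0

# Naive approach: risk minimization on the selectively labeled data.
naive = DecisionTreeClassifier(max_depth=3, min_samples_leaf=200, random_state=0)
naive.fit(x[lend0], y[lend0])

# "Learning to decide": keep the policy stochastic (explore with rate eps),
# so no part of the input space is entirely neglected.
eps = 0.1
lend1 = lend0 | (rng.random(len(x)) < eps)
deciding = DecisionTreeClassifier(max_depth=3, min_samples_leaf=200, random_state=0)
deciding.fit(x[lend1], y[lend1])

# Evaluate both decision rules on fresh applicants, whose true outcomes
# we can observe only because this is a simulation.
x_test, y_test = sample_applicants(50_000)
for name, model in [("naive predictor", naive), ("exploring policy", deciding)]:
    lend = model.predict_proba(x_test)[:, 1] > COST
    print(f"{name:16s} utility: {utility(lend, y_test):+.4f}")
```

In this toy setup the legacy policy only ever observes an unprofitable slice of applicants, so the naive predictor never discovers the profitable region near x = 0 and its deployed utility stays at roughly zero, while the exploring policy labels that region and learns to lend there.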
