Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making

21 September 2021
Yi Yang
Ying Nian Wu
Mei Li
Xiangyu Chang
    FaML
arXiv: 2109.10053
Abstract

Scoring systems, as simple classification models, have significant advantages in interpretability and transparency when making predictions. They facilitate human decision-making by allowing a prediction to be made quickly by hand, adding and subtracting a few point scores, and have therefore been widely used in fields such as medical diagnosis in intensive care units. However, unfairness in these models has long been criticized, and the use of biased data in constructing scoring systems heightens this concern. In this paper, we propose a general framework for creating data-driven, fairness-aware scoring systems. Our approach first develops a social welfare function that incorporates both efficiency and equity. We then translate the social welfare maximization problem from economics into the empirical risk minimization task of the machine learning community and derive a fairness-aware scoring system with the help of mixed integer programming. We show that the proposed framework gives practitioners and policymakers great flexibility in selecting their desired fairness requirements and also allows them to customize requirements by imposing various operational constraints. Experiments on several real data sets verify that the proposed scoring system can achieve the optimal welfare of stakeholders and balance interpretability, fairness, and efficiency.
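
As a rough illustration of the idea, the sketch below selects a small-integer scoring system by minimizing an empirical risk plus a group-fairness penalty, a simple stand-in for the welfare objective described in the abstract. It uses a brute-force search over integer scores rather than the paper's mixed-integer-programming formulation, and the synthetic data, the demographic-parity penalty, and the trade-off weight lam are illustrative assumptions rather than the authors' exact setup.

import itertools
import numpy as np

# Toy sketch (not the paper's MIP formulation): pick integer point scores in
# {-2, ..., 2} and an integer threshold that minimize
#     empirical error + lam * |positive-rate gap between groups|,
# a simple proxy for a welfare objective balancing efficiency and equity.

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.integers(0, 2, size=(n, d))       # binary features
group = rng.integers(0, 2, size=n)        # protected attribute (0 or 1)
y = (X[:, 0] + X[:, 1] >= 1).astype(int)  # synthetic labels

lam = 0.5      # fairness/efficiency trade-off (illustrative)
best = None
for scores in itertools.product(range(-2, 3), repeat=d):
    for thresh in range(-2, 3):
        pred = (X @ np.array(scores) >= thresh).astype(int)
        err = np.mean(pred != y)                                      # efficiency term
        gap = abs(pred[group == 0].mean() - pred[group == 1].mean())  # equity term
        obj = err + lam * gap
        if best is None or obj < best[0]:
            best = (obj, scores, thresh, err, gap)

obj, scores, thresh, err, gap = best
print(f"scores={scores}  threshold={thresh}  error={err:.3f}  parity gap={gap:.3f}")

The printed score vector and threshold form a scoring system that can be applied by hand, while the objective value reflects the chosen balance between predictive error and the parity gap.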
