
Enforcing Fairness Where It Matters: An Approach Based on Difference-of-Convex Constraints

Abstract

Fairness in machine learning has become a critical concern, particularly in high-stakes applications. Existing approaches often focus on achieving full fairness across all score ranges generated by predictive models, ensuring fairness in both high- and low-scoring populations. However, this stringent requirement can compromise predictive performance and may not align with the practical fairness concerns of stakeholders. In this work, we propose a novel framework for training partially fair machine learning models, which enforce fairness within a specific score range of interest, such as the middle range where decisions are most contested, while maintaining flexibility in other regions. We introduce two statistical metrics to rigorously evaluate partial fairness within a given score range, such as the top 20%-40% of scores. To achieve partial fairness, we propose an in-processing method that formulates model training as a constrained optimization problem with difference-of-convex constraints, which we solve with an inexact difference-of-convex algorithm (IDCA). We provide a complexity analysis of IDCA for finding a nearly KKT point. Through numerical experiments on real-world datasets, we demonstrate that our framework achieves high predictive performance while enforcing partial fairness where it matters most.
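As a concrete illustration of a range-restricted fairness metric, the minimal sketch below computes a demographic-parity-style gap confined to a quantile band of the score distribution. The function name `partial_demographic_parity`, the choice of a demographic-parity gap, and the quantile parameterization are illustrative assumptions and may differ from the paper's exact metric definitions.

```python
import numpy as np

def partial_demographic_parity(scores, groups, q_lo=0.6, q_hi=0.8):
    """Demographic-parity gap restricted to a score band (a sketch).

    `scores` are model scores and `groups` is a binary sensitive
    attribute. The band [q_lo, q_hi] is specified by score quantiles;
    q_lo=0.6, q_hi=0.8 targets the top 20%-40% of scores, as in the
    abstract's example. This is an assumed formalization, not
    necessarily the metric defined in the paper.
    """
    lo, hi = np.quantile(scores, [q_lo, q_hi])
    in_band = (scores >= lo) & (scores <= hi)
    rates = []
    for g in (0, 1):
        mask = groups == g
        # Fraction of group g whose score falls inside the band of interest.
        rates.append(in_band[mask].mean())
    return abs(rates[0] - rates[1])

# Usage on synthetic data: two groups with uniformly random scores.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
groups = rng.integers(0, 2, size=1000)
print(partial_demographic_parity(scores, groups))
```

A metric of this form is differentiable in the model scores only through the band-membership indicator, which is one reason enforcing it during training leads naturally to the smoothed, difference-of-convex constraint formulation the paper proposes.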

@article{he2025_2505.12530,
  title={Enforcing Fairness Where It Matters: An Approach Based on Difference-of-Convex Constraints},
  author={Yutian He and Yankun Huang and Yao Yao and Qihang Lin},
  journal={arXiv preprint arXiv:2505.12530},
  year={2025}
}