Fair for All: Best-effort Fairness Guarantees for Classification

18 December 2020

A. Krishnaswamy, Zhihao Jiang, Kangning Wang, Yu Cheng, Kamesh Munagala
Abstract

Standard approaches to group-based notions of fairness, such as \emph{parity} and \emph{equalized odds}, try to equalize absolute measures of performance across known groups (based on race, gender, etc.). Consequently, a group that is inherently harder to classify may hold back the performance on other groups, and no guarantees can be provided for unforeseen groups. Instead, we propose a fairness notion whose guarantee, on each group $g$ in a class $\mathcal{G}$, is relative to the performance of the best classifier on $g$. We apply this notion to broad classes of groups, in particular, where (a) $\mathcal{G}$ consists of all possible groups (subsets) in the data, and (b) $\mathcal{G}$ is more streamlined. For the first setting, which is akin to groups being completely unknown, we devise the {\sc PF} (Proportional Fairness) classifier, which guarantees, on any possible group $g$, an accuracy that is proportional to that of the optimal classifier for $g$, scaled by the relative size of $g$ in the data set. Because all possible groups are included, some of which may be too complex to be relevant, the worst-case theoretical guarantees here have to be proportionally weaker for smaller subsets. For the second setting, we devise the {\sc BeFair} (Best-effort Fair) framework, which seeks an accuracy, on every $g \in \mathcal{G}$, that approximates that of the optimal classifier on $g$, independent of the size of $g$. Aiming for such a guarantee results in a non-convex problem, and we design novel techniques to get around this difficulty when $\mathcal{G}$ is the set of linear hypotheses. We test our algorithms on real-world data sets and present comparative insights on their performance.
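
Read schematically, the two guarantees described in the abstract can be written as follows, where $\mathrm{acc}_g(h)$ is the accuracy of classifier $h$ on group $g$, $\mathrm{OPT}(g)$ is the accuracy of the best classifier in the hypothesis class on $g$, $n$ is the size of the data set, and $\varepsilon$ is an approximation slack. This notation is ours, not the paper's, and the exact constants and approximation factors are stated in the paper itself:

$$\textsc{PF:}\quad \mathrm{acc}_g(h_{\textsc{PF}}) \;\gtrsim\; \frac{|g|}{n}\cdot \mathrm{OPT}(g) \qquad \text{for every group } g \text{ in the data},$$

$$\textsc{BeFair:}\quad \mathrm{acc}_g(h) \;\geq\; \mathrm{OPT}(g) - \varepsilon \qquad \text{for every } g \in \mathcal{G},\ \text{independent of } |g|.$$

The contrast between the two is exactly the abstract's trade-off: {\sc PF} covers all subsets but its guarantee degrades with group size, while {\sc BeFair} gives a size-independent guarantee on a more restricted class $\mathcal{G}$.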

View on arXiv