Metric-Fair Classifier Derandomization

15 June 2022
Jimmy Wu
Yatong Chen
Yang Liu
arXiv:2206.07826
Abstract

We study the problem of classifier derandomization in machine learning: given a stochastic binary classifier $f: X \to [0,1]$, sample a deterministic classifier $\hat{f}: X \to \{0,1\}$ that approximates the output of $f$ in aggregate over any data distribution. Recent work revealed how to efficiently derandomize a stochastic classifier with strong output approximation guarantees, but at the cost of individual fairness -- that is, if $f$ treated similar inputs similarly, $\hat{f}$ did not. In this paper, we initiate a systematic study of classifier derandomization with metric fairness guarantees. We show that the prior derandomization approach is almost maximally metric-unfair, and that a simple "random threshold" derandomization achieves optimal fairness preservation but with weaker output approximation. We then devise a derandomization procedure that provides an appealing tradeoff between these two: if $f$ is $\alpha$-metric fair according to a metric $d$ with a locality-sensitive hash (LSH) family, then our derandomized $\hat{f}$ is, with high probability, $O(\alpha)$-metric fair and a close approximation of $f$. We also prove generic results applicable to all (fair and unfair) classifier derandomization procedures, including a bias-variance decomposition and reductions between various notions of metric fairness.
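As a rough illustration (not the paper's actual construction), the sketch below contrasts the three thresholding schemes named in the abstract: a single global "random threshold", an independent per-point hashed threshold in the spirit of the prior approach, and an LSH-bucketed threshold. All function names, and the toy rounding LSH in the usage example, are hypothetical.

```python
import hashlib
import random


def random_threshold_derandomize(f, seed):
    """'Random threshold' derandomization: draw one global threshold
    t ~ Uniform[0, 1] and output f_hat(x) = 1[f(x) > t]. Since every
    input is compared against the same t, E_t|f_hat(x) - f_hat(x')| =
    |f(x) - f(x')|, so metric fairness is preserved; the price is that
    all outputs are correlated, weakening aggregate approximation."""
    t = random.Random(seed).random()
    return lambda x: int(f(x) > t)


def _hash_to_unit(obj, seed):
    """Deterministically map an object to a pseudo-uniform value in [0, 1)."""
    digest = hashlib.sha256(f"{seed}|{obj}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64


def per_point_derandomize(f, seed):
    """Sketch of the prior approach: each input gets its own hashed
    threshold, so outputs are (pseudo-)independent and approximate f
    well in aggregate. But two similar inputs can fall on opposite
    sides of their unrelated thresholds -- the metric-unfair behavior
    the paper shows is almost maximal for this scheme."""
    return lambda x: int(f(x) > _hash_to_unit(x, seed))


def lsh_derandomize(f, lsh, seed):
    """Sketch of the LSH-based idea: threshold each input by a hash of
    its LSH bucket. Points close under the metric d collide in the LSH
    with high probability, hence share a threshold, which is how the
    paper's procedure keeps fairness within O(alpha) while still
    approximating f closely."""
    return lambda x: int(f(x) > _hash_to_unit(lsh(x), seed))
```

A toy usage on a 1-D input space, with coarse rounding standing in for a real LSH family:

```python
f = lambda x: min(max(x, 0.0), 1.0)   # a toy stochastic classifier
lsh = lambda x: round(x * 4)          # hypothetical LSH: bucket by rounding
f_hat = lsh_derandomize(f, lsh, seed=0)
print([f_hat(x / 10) for x in range(11)])
```

Each scheme is pointwise unbiased (a uniform threshold t satisfies Pr[f(x) > t] = f(x)); what differs is how thresholds are shared across inputs, which is exactly the fairness-versus-approximation tradeoff the abstract describes.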
