Fair Risk Control: A Generalized Framework for Calibrating Multi-group Fairness Risks

3 May 2024
Lujing Zhang, Aaron Roth, Linjun Zhang
FaML

Papers citing "Fair Risk Control: A Generalized Framework for Calibrating Multi-group Fairness Risks"
9 of 9 papers shown

Intersectional Divergence: Measuring Fairness in Regression
Joe Germino, Nuno Moniz, Nitesh V. Chawla
FaML · 68 · 0 · 0 · 01 May 2025

FairFML: Fair Federated Machine Learning with a Case Study on Reducing Gender Disparities in Cardiac Arrest Outcome Prediction
Siqi Li, Qiming Wu, Xin Li, Di Miao, Chuan Hong, ..., Michael Hao Chen, Mengying Yan, Yilin Ning, M. Ong, Nan Liu
33 · 1 · 0 · 07 Oct 2024

Automatically Adaptive Conformal Risk Control
Vincent Blot, Anastasios Nikolas Angelopoulos, Michael I Jordan, Nicolas Brunel
AI4CE · 44 · 2 · 0 · 25 Jun 2024

When is Multicalibration Post-Processing Necessary?
Dutch Hansen, Siddartha Devic, Preetum Nakkiran, Vatsal Sharan
43 · 4 · 0 · 10 Jun 2024

Localized Adaptive Risk Control
Matteo Zecchin, Osvaldo Simeone
40 · 7 · 0 · 13 May 2024

An Algorithmic Framework for Bias Bounties
Ira Globus-Harris, Michael Kearns, Aaron Roth
FedML · 102 · 24 · 0 · 25 Jan 2022

Simple and near-optimal algorithms for hidden stratification and multi-group learning
Abdoreza Asadpour, Daniel J. Hsu
105 · 20 · 0 · 22 Dec 2021

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
FaML · 207 · 2,090 · 0 · 24 Oct 2016

U-Net: Convolutional Networks for Biomedical Image Segmentation
Olaf Ronneberger, Philipp Fischer, Thomas Brox
SSeg, 3DV · 345 · 75,888 · 0 · 18 May 2015