An Empirical Study of Rich Subgroup Fairness for Machine Learning
Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
24 August 2018 · FaML

Papers citing "An Empirical Study of Rich Subgroup Fairness for Machine Learning"

9 papers shown
Fairness Practices in Industry: A Case Study in Machine Learning Teams Building Recommender Systems
Jing Nathan Yan, Junxiong Wang, Jeffrey M. Rzeszotarski, Allison Koenecke
FaML · 26 May 2025
On the Promise for Assurance of Differentiable Neurosymbolic Reasoning Paradigms
Luke E. Richards, Jessie Yaros, Jasen Babcock, Coung Ly, Robin Cosbey, Timothy Doster, Cynthia Matuszek
NAI · 13 Feb 2025
EARN Fairness: Explaining, Asking, Reviewing, and Negotiating Artificial Intelligence Fairness Metrics Among Stakeholders
Lin Luo, Yuri Nakao, Mathieu Chollet, Hiroya Inakoshi, Simone Stumpf
16 Jul 2024
Diversity-aware clustering: Computational Complexity and Approximation Algorithms
Suhas Thejaswi, Ameet Gadekar, Bruno Ordozgoiti, Aristides Gionis
10 Jan 2024
Bias Testing and Mitigation in LLM-based Code Generation
Dong Huang, Qingwen Bu, Jie M. Zhang, Xiaofei Xie, Junjie Chen, Heming Cui
03 Sep 2023
One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification
Kenji Kobayashi, Yuri Nakao
FaML · 26 Oct 2020
A Reductions Approach to Fair Classification
Alekh Agarwal, A. Beygelzimer, Miroslav Dudík, John Langford, Hanna M. Wallach
FaML · 06 Mar 2018
Fairness in Criminal Justice Risk Assessments: The State of the Art
R. Berk, Hoda Heidari, S. Jabbari, Michael Kearns, Aaron Roth
27 Mar 2017
Inherent Trade-Offs in the Fair Determination of Risk Scores
Jon M. Kleinberg, S. Mullainathan, Manish Raghavan
FaML · 19 Sep 2016