ResearchTrend.AI

A Comprehensive Empirical Study of Bias Mitigation Methods for Machine Learning Classifiers
7 July 2022 · arXiv: 2207.03277
Zhenpeng Chen, Jie M. Zhang, Federica Sarro, Mark Harman
FaML

Papers citing "A Comprehensive Empirical Study of Bias Mitigation Methods for Machine Learning Classifiers"

7 papers shown
Whence Is A Model Fair? Fixing Fairness Bugs via Propensity Score Matching
Kewen Peng, Yicheng Yang, Hao Zhuo
23 Apr 2025
Understanding trade-offs in classifier bias with quality-diversity optimization: an application to talent management
Catalina M Jaramillo, Paul Squires, Julian Togelius
25 Nov 2024
Bias Testing and Mitigation in LLM-based Code Generation
Dong Huang, Qingwen Bu, Jie M. Zhang, Xiaofei Xie, Junjie Chen, Heming Cui
03 Sep 2023
M$^3$Fair: Mitigating Bias in Healthcare Data through Multi-Level and Multi-Sensitive-Attribute Reweighting Method
Yinghao Zhu, Jingkun An, Enshen Zhou, Lu An, Junyi Gao, ..., Haoran Feng, Bo-Ru Hou, Wen Tang, Cheng Pan, Liantao Ma
07 Jun 2023
Causality-Aided Trade-off Analysis for Machine Learning Fairness
Zhenlan Ji, Pingchuan Ma, Shuai Wang, Yanhui Li
FaML
22 May 2023
A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa, FaML
23 Aug 2019
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
FaML
24 Oct 2016