Can Ensembling Pre-processing Algorithms Lead to Better Machine Learning Fairness?

5 December 2022
Khaled Badran, Pierre-Olivier Coté, Amanda Kolopanis, Rached Bouchoucha, Antonio Collante, D. Costa, Emad Shihab, Foutse Khomh
Communities: FaML, FedML
Abstract

As machine learning (ML) systems are adopted in increasingly critical areas, it has become crucial to address the biases that can arise in these systems. Several fairness pre-processing algorithms are available to alleviate implicit biases during model training. These algorithms employ different notions of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy. In this work, we evaluate three popular fairness pre-processing algorithms and investigate the potential of combining all three into a more robust pre-processing ensemble. We report lessons learned that can help practitioners better select fairness algorithms for their models.
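
The ensemble idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes binary labels and a single binary sensitive attribute, uses a Kamiran-Calders-style reweighing (plus a no-op baseline) as stand-ins for the evaluated pre-processing algorithms, trains one model per transformed training set, and combines the models by majority vote.

```python
# Minimal sketch of a pre-processing ensemble (not the paper's code).
# Each "pre-processing algorithm" maps (X, y, sensitive) to a transformed
# training set plus per-instance weights; one classifier is trained per
# transform and the ensemble predicts by majority vote.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighing(X, y, s):
    """Kamiran-Calders-style reweighing: weight each (group, label) cell
    so the sensitive attribute and the label look statistically independent."""
    w = np.ones(len(y), dtype=float)
    for g in np.unique(s):
        for c in np.unique(y):
            cell = (s == g) & (y == c)
            expected = (s == g).mean() * (y == c).mean()
            observed = cell.mean()
            if observed > 0:
                w[cell] = expected / observed
    return X, y, w

def identity(X, y, s):
    """Baseline: no fairness intervention."""
    return X, y, np.ones(len(y), dtype=float)

# Hypothetical stand-ins for the pre-processing algorithms being ensembled.
PREPROCESSORS = [identity, reweighing]

def fit_ensemble(X, y, s):
    """Train one classifier per pre-processing algorithm."""
    models = []
    for pre in PREPROCESSORS:
        X_t, y_t, w = pre(X, y, s)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_t, y_t, sample_weight=w)
        models.append(clf)
    return models

def predict_majority(models, X):
    """Combine binary predictions (0/1) by majority vote (ties go to 1)."""
    votes = np.stack([m.predict(X) for m in models])  # shape: (n_models, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

Other pre-processing algorithms, such as disparate-impact repair or learned fair representations, would slot into the same interface as additional entries in PREPROCESSORS.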

View on arXiv: 2212.02614