How to be fair? A study of label and selection bias
Marco Favier, T. Calders, Sam Pinxteren, Jonathan Meyer
21 March 2024 · arXiv:2403.14282

Papers citing "How to be fair? A study of label and selection bias"

5 of 5 citing papers shown:
  • Correcting Annotator Bias in Training Data: Population-Aligned Instance Replication (PAIR). Stephanie Eckman, Bolei Ma, Christoph Kern, Rob Chew, Barbara Plank, Frauke Kreuter. 12 Jan 2025.
  • Cherry on the Cake: Fairness is NOT an Optimization Problem. Marco Favier, T. Calders. 24 Jun 2024.
  • The Pursuit of Fairness in Artificial Intelligence Models: A Survey. Tahsin Alamgir Kheya, Mohamed Reda Bouadjenek, Sunil Aryal. 26 Mar 2024.
  • Towards the Right Kind of Fairness in AI. Boris Ruf, Marcin Detyniecki. 16 Feb 2021.
  • A Survey on Bias and Fairness in Machine Learning. Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan. 23 Aug 2019. [SyDa, FaML]