Making ML models fairer through explanations: the case of LimeOut

1 November 2020 · arXiv:2011.00603
Guilherme Alves, Vaishnavi Bhargava, Miguel Couceiro, Amedeo Napoli
FaML
Papers citing "Making ML models fairer through explanations: the case of LimeOut" (4 of 4 shown)

  • Explanations as Bias Detectors: A Critical Study of Local Post-hoc XAI Methods for Fairness Exploration
    Vasiliki Papanikou, Danae Pla Karidi, E. Pitoura, Emmanouil Panagiotou, Eirini Ntoutsi
    01 May 2025

  • Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
    Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
    FaML · 23 Sep 2022

  • Fairness of Machine Learning Algorithms in Demography
    I. Emmanuel, E. Mitrofanova
    FaML · 02 Feb 2022

  • Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
    Alexandra Chouldechova
    FaML · 24 Oct 2016