Explainability for fair machine learning
T. Begley, Tobias Schwedes, Christopher Frye, Ilya Feige
arXiv: 2010.07389 · 14 October 2020
Tags: FaML, FedML
Papers citing "Explainability for fair machine learning" (15 papers shown)
1. Explanations as Bias Detectors: A Critical Study of Local Post-hoc XAI Methods for Fairness Exploration — Vasiliki Papanikou, Danae Pla Karidi, E. Pitoura, Emmanouil Panagiotou, Eirini Ntoutsi (01 May 2025)
2. Constructing Fair Latent Space for Intersection of Fairness and Explainability — Hyungjun Joo, Hyeonggeun Han, Sehwan Kim, Sangwoo Hong, Jungwoo Lee (23 Dec 2024)
3. Procedural Fairness in Machine Learning — Ziming Wang, Changwu Huang, Xin Yao (02 Apr 2024) [FaML]
4. REFRESH: Responsible and Efficient Feature Reselection Guided by SHAP Values — Shubham Sharma, Sanghamitra Dutta, Emanuele Albini, Freddy Lecue, Daniele Magazzeni, Manuela Veloso (13 Mar 2024)
5. On Explaining Unfairness: An Overview — Christos Fragkathoulas, Vasiliki Papanikou, Danae Pla Karidi, E. Pitoura (16 Feb 2024) [XAI, FaML]
6. ALMANACS: A Simulatability Benchmark for Language Model Explainability — Edmund Mills, Shiye Su, Stuart J. Russell, Scott Emmons (20 Dec 2023)
7. Towards Fair and Calibrated Models — Anand Brahmbhatt, Vipul Rathore, Mausam, Parag Singla (16 Oct 2023) [FaML]
8. Function Composition in Trustworthy Machine Learning: Implementation Choices, Insights, and Questions — Manish Nagireddy, Moninder Singh, Samuel C. Hoffman, Evaline Ju, K. Ramamurthy, Kush R. Varshney (17 Feb 2023)
9. Manifestations of Xenophobia in AI Systems — Nenad Tomašev, J. L. Maynard, Iason Gabriel (15 Dec 2022)
10. Tensions Between the Proxies of Human Values in AI — Teresa Datta, D. Nissani, Max Cembalest, Akash Khanna, Haley Massa, John P. Dickerson (14 Dec 2022)
11. Fairness and Explainability: Bridging the Gap Towards Fair Model Explanations — Yuying Zhao, Yu-Chiang Frank Wang, Tyler Derr (07 Dec 2022) [FaML]
12. Explainable Global Fairness Verification of Tree-Based Classifiers — Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Federico Marcuzzi (27 Sep 2022)
13. Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models — Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser (08 Jun 2022)
14. Fool SHAP with Stealthily Biased Sampling — Gabriel Laberge, Ulrich Aïvodji, Satoshi Hara, M. Marchand, Foutse Khomh (30 May 2022) [MLAU, AAML, FAtt]
15. Explaining Algorithmic Fairness Through Fairness-Aware Causal Path Decomposition — Weishen Pan, Sen Cui, Jiang Bian, Changshui Zhang, Fei Wang (11 Aug 2021) [CML, FaML]