Characterizing the risk of fairwashing

Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara
14 June 2021 · arXiv:2106.07504

Papers citing "Characterizing the risk of fairwashing"

20 of 20 citing papers shown:

Fair Play for Individuals, Foul Play for Groups? Auditing Anonymization's Impact on ML Fairness
Héber H. Arcolezi, Mina Alishahi, Adda-Akram Bendoukha, Nesrine Kaaniche
12 May 2025

The Curious Case of Arbitrariness in Machine Learning
Prakhar Ganesh, Afaf Taik, G. Farnadi
28 Jan 2025

SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning
Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
22 Dec 2023 · Topics: FaML

A Path to Simpler Models Starts With Noise
Lesia Semenova, Harry Chen, Ronald E. Parr, Cynthia Rudin
30 Oct 2023

A Critical Survey on Fairness Benefits of Explainable AI
Luca Deck, Jakob Schoeffer, Maria De-Arteaga, Niklas Kühl
15 Oct 2023

Probabilistic Dataset Reconstruction from Interpretable Models
Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
29 Aug 2023

Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem
S. Goethals, David Martens, Theodoros Evgeniou
24 Jun 2023

Adversarial Attacks on the Interpretation of Neuron Activation Maximization
Géraldin Nanfack, A. Fulleringer, Jonathan Marty, Michael Eickenberg, Eugene Belilovsky
12 Jun 2023 · Topics: AAML, FAtt

Adversarial attacks and defenses in explainable artificial intelligence: A survey
Hubert Baniecki, P. Biecek
06 Jun 2023 · Topics: AAML

On the relevance of APIs facing fairwashed audits
Jade Garcia Bourrée, Erwan Le Merrer, Gilles Tredan, Benoit Rottembourg
23 May 2023 · Topics: MLAU

Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation
Natalia Díaz Rodríguez, Javier Del Ser, Mark Coeckelbergh, Marcos López de Prado, E. Herrera-Viedma, Francisco Herrera
02 May 2023 · Topics: XAI

Why is plausibility surprisingly problematic as an XAI criterion?
Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
30 Mar 2023

Learning Hybrid Interpretable Models: Theory, Taxonomy, and Methods
Julien Ferry, Gabriel Laberge, Ulrich Aïvodji
08 Mar 2023

Tensions Between the Proxies of Human Values in AI
Teresa Datta, D. Nissani, Max Cembalest, Akash Khanna, Haley Massa, John P. Dickerson
14 Dec 2022

The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi
06 May 2022

Computing the Collection of Good Models for Rule Lists
Kota Mata, Kentaro Kanamori, Hiroki Arimura
24 Apr 2022

When and How to Fool Explainable Models (and Humans) with Adversarial Examples
Jon Vadillo, Roberto Santana, Jose A. Lozano
05 Jul 2021 · Topics: SILM, AAML

Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah
20 Oct 2020 · Topics: CML

Learning Certifiably Optimal Rule Lists for Categorical Data
E. Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, Cynthia Rudin
06 Apr 2017

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
24 Oct 2016 · Topics: FaML