Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
Sebastian Bordt, Michèle Finck, Eric Raidl, Ulrike von Luxburg
arXiv:2201.10295 · 25 January 2022 · Tags: AILaw

Papers citing "Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts" (35 of 35 shown)
DiCE-Extended: A Robust Approach to Counterfactual Explanations in Machine Learning
Volkan Bakir, Polat Goktas, Sureyya Akyuz · 26 Apr 2025

Interpretable Machine Learning in Physics: A Review
Sebastian Johann Wetzel, Seungwoong Ha, Raban Iten, Miriam Klopotek, Ziming Liu · Tags: AI4CE · 30 Mar 2025

Concept Layers: Enhancing Interpretability and Intervenability via LLM Conceptualization
Or Raphael Bidusa, Shaul Markovitch · 20 Feb 2025

ExpProof: Operationalizing Explanations for Confidential Models with ZKPs
Chhavi Yadav, Evan Monroe Laufer, Dan Boneh, Kamalika Chaudhuri · 06 Feb 2025
The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR
Laura State, Alejandra Bringas Colmenarejo, Andrea Beretta, Salvatore Ruggieri, Franco Turini, Stephanie Law · Tags: AILaw, ELM · 10 Jan 2025

Unlearning-based Neural Interpretations
Ching Lam Choi, Alexandre Duplessis, Serge Belongie · Tags: FAtt · 10 Oct 2024

Explainable AI needs formal notions of explanation correctness
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki · Tags: XAI · 22 Sep 2024

Deep Knowledge-Infusion For Explainable Depression Detection
Sumit Dalal, Sarika Jain, M. Dave · 01 Sep 2024
Auditing Local Explanations is Hard
Robi Bhattacharjee, U. V. Luxburg · Tags: LRM, MLAU, FAtt · 18 Jul 2024

Why do explanations fail? A typology and discussion on failures in XAI
Clara Bove, Thibault Laugel, Marie-Jeanne Lesot, C. Tijus, Marcin Detyniecki · 22 May 2024

Why You Should Not Trust Interpretations in Machine Learning: Adversarial Attacks on Partial Dependence Plots
Xi Xin, Giles Hooker, Fei Huang · Tags: AAML · 29 Apr 2024

Global Concept Explanations for Graphs by Contrastive Learning
Jonas Teufel, Pascal Friederich · 25 Apr 2024

X Hacking: The Threat of Misguided AutoML
Rahul Sharma, Sergey Redyuk, Sumantrak Mukherjee, Andrea Sipka, Sebastian Vollmer, David Selby · 16 Jan 2024
A Cross Attention Approach to Diagnostic Explainability using Clinical Practice Guidelines for Depression
Sumit Dalal, Deepa Tilwani, Kaushik Roy, Manas Gaur, Sarika Jain, V. Shalin, Amit P. Sheth · 23 Nov 2023

On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain · Tags: FaML · 20 Nov 2023

How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?
Zachariah Carmichael, Walter J. Scheirer · Tags: FAtt · 27 Oct 2023

Pixel-Grounded Prototypical Part Networks
Zachariah Carmichael, Suhas Lohit, A. Cherian, Michael Jeffrey Jones, Walter J. Scheirer · 25 Sep 2023

LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs
Ben Lengerich, Sebastian Bordt, Harsha Nori, M. Nunnally, Y. Aphinyanaphongs, Manolis Kellis, Rich Caruana · 02 Aug 2023
Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem
S. Goethals, David Martens, Theodoros Evgeniou · 24 Jun 2023

The Case Against Explainability
Hofit Wasserman Rozen, N. Elkin-Koren, Ran Gilad-Bachrach · Tags: AILaw, ELM · 20 May 2023

Disagreement amongst counterfactual explanations: How transparency can be deceptive
Dieter Brughmans, Lissa Melis, David Martens · 25 Apr 2023

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
L. Nannini, Agathe Balayn, A. Smith · 20 Apr 2023

Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication
B. Keenan, Kacper Sokol · 07 Feb 2023

COmic: Convolutional Kernel Networks for Interpretable End-to-End Learning on (Multi-)Omics Data
Jonas C. Ditz, Bernhard Reuter, Nícolas Pfeifer · 02 Dec 2022

Interpretable Geometric Deep Learning via Learnable Randomness Injection
Siqi Miao, Yunan Luo, Miaoyuan Liu, Pan Li · 30 Oct 2022
From Shapley Values to Generalized Additive Models and back
Sebastian Bordt, U. V. Luxburg · Tags: FAtt, TDI · 08 Sep 2022

Interpretable (not just posthoc-explainable) medical claims modeling for discharge placement to prevent avoidable all-cause readmissions or death
Joshua C. Chang, Ted L. Chang, Carson C. Chow, R. Mahajan, Sonya Mahajan, Joe Maisog, Shashaank Vattikuti, Hongjing Xia · Tags: FAtt, OOD · 28 Aug 2022

A Means-End Account of Explainable Artificial Intelligence
O. Buchholz · Tags: XAI · 09 Aug 2022

Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. D. Heide, T. Erven · Tags: FAtt · 31 May 2022

Unfooling Perturbation-Based Post Hoc Explainers
Zachariah Carmichael, Walter J. Scheirer · Tags: AAML · 29 May 2022

Benchmarking Instance-Centric Counterfactual Algorithms for XAI: From White Box to Black Box
Catarina Moreira, Yu-Liang Chou, Chih-Jou Hsieh, Chun Ouyang, Joaquim A. Jorge, João Pereira · Tags: CML · 04 Mar 2022
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju · 03 Feb 2022

Convolutional Motif Kernel Networks
Jonas C. Ditz, Bernhard Reuter, N. Pfeifer · Tags: FAtt · 03 Nov 2021

What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum · Tags: XAI · 15 Feb 2021

Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah · Tags: CML · 20 Oct 2020