arXiv:1911.02508 · Cited By
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
6 November 2019
Dylan Slack
Sophie Hilgard
Emily Jia
Sameer Singh
Himabindu Lakkaraju
FAtt
AAML
MLAU
Papers citing
"Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods"
50 / 133 papers shown
On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti
Karthik Balaji Ganesh
Manoj Gayala
Nandita Lakshmi Tunuguntla
Sandesh Kamath
V. Balasubramanian
XAI
FAtt
AAML
32
4
0
09 Nov 2022
Logic-Based Explainability in Machine Learning
Sasha Rubin
LRM
XAI
50
39
0
24 Oct 2022
Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
Adrien Bennetot
Gianni Franchi
Javier Del Ser
Raja Chatila
Natalia Díaz Rodríguez
AAML
32
29
0
26 Sep 2022
Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Jakob Schoeffer
Maria De-Arteaga
Niklas Kuehl
FaML
45
46
0
23 Sep 2022
EMaP: Explainable AI with Manifold-based Perturbations
Minh Nhat Vu
Huy Mai
My T. Thai
AAML
35
2
0
18 Sep 2022
Macroeconomic Predictions using Payments Data and Machine Learning
James T. E. Chapman
Ajit Desai
16
15
0
02 Sep 2022
Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research
Zhibo Zhang
H. A. Hamadi
Ernesto Damiani
C. Yeun
Fatma Taher
AAML
32
148
0
31 Aug 2022
Interpretable (not just posthoc-explainable) medical claims modeling for discharge placement to prevent avoidable all-cause readmissions or death
Joshua C. Chang
Ted L. Chang
Carson C. Chow
R. Mahajan
Sonya Mahajan
Joe Maisog
Shashaank Vattikuti
Hongjing Xia
FAtt
OOD
37
0
0
28 Aug 2022
Explainable AI for tailored electricity consumption feedback -- an experimental evaluation of visualizations
Jacqueline Wastensteiner
T. Weiß
Felix Haag
K. Hopf
25
11
0
24 Aug 2022
Augmented cross-selling through explainable AI -- a case from energy retailing
Felix Haag
K. Hopf
Pedro Menelau Vasconcelos
Thorsten Staake
26
4
0
24 Aug 2022
Review of Natural Language Processing in Pharmacology
D. Trajanov
Vangel Trajkovski
Makedonka Dimitrieva
Jovana Dobreva
Milos Jovanovik
Matej Klemen
Aleš Žagar
Marko Robnik-Šikonja
LM&MA
23
7
0
22 Aug 2022
Equivariant and Invariant Grounding for Video Question Answering
Yicong Li
Xiang Wang
Junbin Xiao
Tat-Seng Chua
20
25
0
26 Jul 2022
BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence
Isha Hameed
Samuel Sharpe
Daniel Barcklow
Justin Au-yeung
Sahil Verma
Jocelyn Huang
Brian Barr
C. Bayan Bruss
35
14
0
12 Jul 2022
On Computing Relevant Features for Explaining NBCs
Yacine Izza
Sasha Rubin
36
5
0
11 Jul 2022
"Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI
Leilani H. Gilpin
Andrew R. Paley
M. A. Alam
Sarah Spurlock
Kristian J. Hammond
XAI
26
6
0
27 Jun 2022
Eliminating The Impossible, Whatever Remains Must Be True
Jinqiang Yu
Alexey Ignatiev
Peter J. Stuckey
Nina Narodytska
Sasha Rubin
19
23
0
20 Jun 2022
Interpretable Models Capable of Handling Systematic Missingness in Imbalanced Classes and Heterogeneous Datasets
Sreejita Ghosh
E. Baranowski
Michael Biehl
W. Arlt
Peter Tiño
23
6
0
04 Jun 2022
Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema
R. D. Heide
T. Erven
FAtt
47
18
0
31 May 2022
Fool SHAP with Stealthily Biased Sampling
Gabriel Laberge
Ulrich Aïvodji
Satoshi Hara
M. Marchand
Foutse Khomh
MLAU
AAML
FAtt
10
2
0
30 May 2022
Unfooling Perturbation-Based Post Hoc Explainers
Zachariah Carmichael
Walter J. Scheirer
AAML
60
14
0
29 May 2022
Neural Basis Models for Interpretability
Filip Radenovic
Abhimanyu Dubey
D. Mahajan
FAtt
32
46
0
27 May 2022
Prototype Based Classification from Hierarchy to Fairness
Mycal Tucker
J. Shah
FaML
16
6
0
27 May 2022
The Solvability of Interpretability Evaluation Metrics
Yilun Zhou
J. Shah
70
8
0
18 May 2022
Sparse Visual Counterfactual Explanations in Image Space
Valentyn Boreiko
Maximilian Augustin
Francesco Croce
Philipp Berens
Matthias Hein
BDL
CML
30
26
0
16 May 2022
Can Rationalization Improve Robustness?
Howard Chen
Jacqueline He
Karthik Narasimhan
Danqi Chen
AAML
23
40
0
25 Apr 2022
Backdooring Explainable Machine Learning
Maximilian Noppel
Lukas Peter
Christian Wressnegger
AAML
16
5
0
20 Apr 2022
Marrying Fairness and Explainability in Supervised Learning
Przemyslaw A. Grabowicz
Nicholas Perello
Aarshee Mishra
FaML
46
43
0
06 Apr 2022
Interpretation of Black Box NLP Models: A Survey
Shivani Choudhary
N. Chatterjee
S. K. Saha
FAtt
34
10
0
31 Mar 2022
Robustness and Usefulness in AI Explanation Methods
Erick Galinkin
FAtt
28
1
0
07 Mar 2022
Towards a Responsible AI Development Lifecycle: Lessons From Information Security
Erick Galinkin
SILM
19
6
0
06 Mar 2022
Sensing accident-prone features in urban scenes for proactive driving and accident prevention
Sumit Mishra
Praveenbalaji Rajendran
L. Vecchietti
Dongsoo Har
19
13
0
25 Feb 2022
Margin-distancing for safe model explanation
Tom Yan
Chicheng Zhang
28
3
0
23 Feb 2022
Analogies and Feature Attributions for Model Agnostic Explanation of Similarity Learners
K. Ramamurthy
Amit Dhurandhar
Dennis L. Wei
Zaid Bin Tariq
FAtt
38
3
0
02 Feb 2022
Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
Sebastian Bordt
Michèle Finck
Eric Raidl
U. V. Luxburg
AILaw
39
77
0
25 Jan 2022
GPEX, A Framework For Interpreting Artificial Neural Networks
Amir Akbarnejad
G. Bigras
Nilanjan Ray
47
4
0
18 Dec 2021
LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations
Weronika Hryniewska
Adrianna Grudzień
P. Biecek
FAtt
53
3
0
15 Nov 2021
Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities
Waddah Saeed
C. Omlin
XAI
36
414
0
11 Nov 2021
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel
Rémi Cadène
Mathieu Chalvidal
Matthieu Cord
David Vigouroux
Thomas Serre
MLAU
FAtt
AAML
114
58
0
07 Nov 2021
Designing Inherently Interpretable Machine Learning Models
Agus Sudjianto
Aijun Zhang
FaML
13
31
0
02 Nov 2021
A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Saumitra Mishra
Sanghamitra Dutta
Jason Long
Daniele Magazzeni
AAML
14
58
0
30 Oct 2021
Unpacking the Black Box: Regulating Algorithmic Decisions
Laura Blattner
Scott Nelson
Jann Spiess
MLAU
FaML
28
19
0
05 Oct 2021
DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
Dongqi Han
Zhiliang Wang
Wenqi Chen
Ying Zhong
Su Wang
Han Zhang
Jiahai Yang
Xingang Shi
Xia Yin
AAML
24
76
0
23 Sep 2021
Learning Predictive and Interpretable Timeseries Summaries from ICU Data
Nari Johnson
S. Parbhoo
A. Ross
Finale Doshi-Velez
AI4TS
27
7
0
22 Sep 2021
Attributing Fair Decisions with Attention Interventions
Ninareh Mehrabi
Umang Gupta
Fred Morstatter
Greg Ver Steeg
Aram Galstyan
32
21
0
08 Sep 2021
Model Explanations via the Axiomatic Causal Lens
Gagan Biradar
Vignesh Viswanathan
Yair Zick
XAI
CML
25
1
0
08 Sep 2021
On the Veracity of Local, Model-agnostic Explanations in Audio Classification: Targeted Investigations with Adversarial Examples
Verena Praher
Katharina Prinz
A. Flexer
Gerhard Widmer
AAML
FAtt
11
9
0
19 Jul 2021
Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations
Bodhisattwa Prasad Majumder
Oana-Maria Camburu
Thomas Lukasiewicz
Julian McAuley
25
35
0
25 Jun 2021
What will it take to generate fairness-preserving explanations?
Jessica Dai
Sohini Upadhyay
Stephen H. Bach
Himabindu Lakkaraju
FAtt
FaML
15
14
0
24 Jun 2021
On Locality of Local Explanation Models
Sahra Ghalebikesabi
Lucile Ter-Minassian
Karla Diaz-Ordaz
Chris Holmes
FedML
FAtt
26
39
0
24 Jun 2021
Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
Yang Liu
Sujay Khandagale
Colin White
W. Neiswanger
37
65
0
23 Jun 2021