Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
arXiv:2008.05030, 11 August 2020. Topics: FAtt.
Papers citing "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability" (48 papers shown):
Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models (11 Feb 2025)
Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read. Topics: FAtt, XAI. Citations: 0.

Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models (21 Aug 2024)
Sepehr Kamahi, Yadollah Yaghoobzadeh. Citations: 0.

Efficient and Accurate Explanation Estimation with Distribution Compression (26 Jun 2024)
Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek. Topics: FAtt. Citations: 4.

Uncertainty Quantification for Gradient-based Explanations in Neural Networks (25 Mar 2024)
Mihir Mulye, Matias Valdenegro-Toro. Topics: UQCV, FAtt. Citations: 0.

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective (03 Feb 2022)
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju. Citations: 195.

Probabilistic Sufficient Explanations (21 May 2021)
Eric Wang, Pasha Khosravi, Guy Van den Broeck. Topics: XAI, FAtt, TPM. Citations: 24.

Towards the Unification and Robustness of Perturbation and Gradient Based Explanations (21 Feb 2021)
Sushant Agarwal, S. Jabbari, Chirag Agarwal, Sohini Upadhyay, Zhiwei Steven Wu, Himabindu Lakkaraju. Topics: FAtt, AAML. Citations: 63.

BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations (05 Dec 2020)
Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn. Topics: FAtt. Citations: 94.

Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression (02 Dec 2020)
Ian Covert, Su-In Lee. Topics: FAtt. Citations: 171.

Gradient-based Analysis of NLP Models is Manipulable (12 Oct 2020)
Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh. Topics: AAML, FAtt. Citations: 58.

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks (16 Jun 2020)
Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft. Topics: UQCV, FAtt. Citations: 31.

On Tractable Representations of Binary Neural Networks (05 Apr 2020)
Weijia Shi, Andy Shih, Adnan Darwiche, Arthur Choi. Topics: TPM, OffRL. Citations: 69.

"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations (15 Nov 2019)
Himabindu Lakkaraju, Osbert Bastani. Citations: 255.

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods (06 Nov 2019)
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju. Topics: FAtt, AAML, MLAU. Citations: 819.

bLIMEy: Surrogate Prediction Explanations Beyond LIME (29 Oct 2019)
Kacper Sokol, Alexander Hepburn, Raúl Santos-Rodríguez, Peter A. Flach. Topics: FAtt. Citations: 38.

CXPlain: Causal Explanations for Model Interpretation under Uncertainty (27 Oct 2019)
Patrick Schwab, W. Karlen. Topics: FAtt, CML. Citations: 209.

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods (04 Oct 2019)
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom. Topics: FAtt, AAML. Citations: 61.

DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems (24 Jun 2019)
Muhammad Rehman Zafar, N. Khan. Topics: FAtt. Citations: 157.

Explanations can be manipulated and geometry is to blame (19 Jun 2019)
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel. Topics: AAML, FAtt. Citations: 334.

Certifiably Robust Interpretation in Deep Learning (28 May 2019)
Alexander Levine, Sahil Singla, Soheil Feizi. Topics: FAtt, AAML. Citations: 64.

Fooling Neural Network Interpretations via Adversarial Model Manipulation (06 Feb 2019)
Juyeon Heo, Sunghwan Joo, Taesup Moon. Topics: AAML, FAtt. Citations: 204.

On the (In)fidelity and Sensitivity for Explanations (27 Jan 2019)
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar. Topics: FAtt. Citations: 454.

Abduction-Based Explanations for Machine Learning Models (26 Nov 2018)
Alexey Ignatiev, Nina Narodytska, Sasha Rubin. Topics: FAtt. Citations: 226.

Explaining Deep Learning Models - A Bayesian Non-parametric Approach (07 Nov 2018)
Wenbo Guo, Sui Huang, Yunzhe Tao, Masashi Sugiyama, Lin Lin. Topics: BDL. Citations: 47.

Sanity Checks for Saliency Maps (08 Oct 2018)
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim. Topics: FAtt, AAML, XAI. Citations: 1,970.

Actionable Recourse in Linear Classification (18 Sep 2018)
Berk Ustun, Alexander Spangher, Yang Liu. Topics: FaML. Citations: 550.

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data (08 Aug 2018)
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan. Topics: FAtt, TDI. Citations: 216.

Model Agnostic Supervised Local Explanations (09 Jul 2018)
Gregory Plumb, Denali Molitor, Ameet Talwalkar. Topics: FAtt, LRM, MILM. Citations: 199.

On the Robustness of Interpretability Methods (21 Jun 2018)
David Alvarez-Melis, Tommi Jaakkola. Citations: 528.

A Symbolic Approach to Explaining Bayesian Network Classifiers (09 May 2018)
Andy Shih, Arthur Choi, Adnan Darwiche. Topics: FAtt. Citations: 243.

Manipulating and Measuring Model Interpretability (21 Feb 2018)
Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach. Citations: 698.

Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR (01 Nov 2017)
Sandra Wachter, Brent Mittelstadt, Chris Russell. Topics: MLAU. Citations: 2,360.

Interpretation of Neural Networks is Fragile (29 Oct 2017)
Amirata Ghorbani, Abubakar Abid, James Zou. Topics: FAtt, AAML. Citations: 867.

Interpretability via Model Extraction (29 Jun 2017)
Osbert Bastani, Carolyn Kim, Hamsa Bastani. Topics: FAtt. Citations: 129.

SmoothGrad: removing noise by adding noise (12 Jun 2017)
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg. Topics: FAtt, ODL. Citations: 2,226.

A Unified Approach to Interpreting Model Predictions (22 May 2017)
Scott M. Lundberg, Su-In Lee. Topics: FAtt. Citations: 21,939.

Understanding Black-box Predictions via Influence Functions (14 Mar 2017)
Pang Wei Koh, Percy Liang. Topics: TDI. Citations: 2,899.

Axiomatic Attribution for Deep Networks (04 Mar 2017)
Mukund Sundararajan, Ankur Taly, Qiqi Yan. Topics: OOD, FAtt. Citations: 6,015.

Mapping chemical performance on molecular structures using locally interpretable explanations (22 Nov 2016)
Leanne S. Whitmore, Anthe George, Corey M. Hudson. Topics: FAtt. Citations: 12.

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (07 Oct 2016)
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. Topics: FAtt. Citations: 20,070.

Model-Agnostic Interpretability of Machine Learning (16 Jun 2016)
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. Topics: FAtt, FaML. Citations: 838.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier (16 Feb 2016)
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. Topics: FAtt, FaML. Citations: 16,990.

The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification (03 Mar 2015)
Been Kim, Cynthia Rudin, J. Shah. Citations: 321.

Very Deep Convolutional Networks for Large-Scale Image Recognition (04 Sep 2014)
Karen Simonyan, Andrew Zisserman. Topics: FAtt, MDE. Citations: 100,479.

Speeding up Convolutional Neural Networks with Low Rank Expansions (15 May 2014)
Max Jaderberg, Andrea Vedaldi, Andrew Zisserman. Citations: 1,465.

Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation (02 Apr 2014)
Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus. Topics: FAtt. Citations: 1,692.

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (20 Dec 2013)
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman. Topics: FAtt. Citations: 7,308.

Supersparse Linear Integer Models for Interpretable Classification (27 Jun 2013)
Berk Ustun, Stefano Tracà, Cynthia Rudin. Citations: 43.