Reliable Post hoc Explanations: Modeling Uncertainty in Explainability

Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
arXiv:2008.05030 · FAtt · 11 August 2020

Papers citing "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability"

48 papers shown

Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read
FAtt, XAI · 0 citations · 11 Feb 2025

Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models
Sepehr Kamahi, Yadollah Yaghoobzadeh
0 citations · 21 Aug 2024

Efficient and Accurate Explanation Estimation with Distribution Compression
Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek
FAtt · 4 citations · 26 Jun 2024

Uncertainty Quantification for Gradient-based Explanations in Neural Networks
Mihir Mulye, Matias Valdenegro-Toro
UQCV, FAtt · 0 citations · 25 Mar 2024

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
195 citations · 03 Feb 2022

Probabilistic Sufficient Explanations
Eric Wang, Pasha Khosravi, Guy Van den Broeck
XAI, FAtt, TPM · 24 citations · 21 May 2021

Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
Sushant Agarwal, S. Jabbari, Chirag Agarwal, Sohini Upadhyay, Zhiwei Steven Wu, Himabindu Lakkaraju
FAtt, AAML · 63 citations · 21 Feb 2021

BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations
Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn
FAtt · 94 citations · 05 Dec 2020

Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression
Ian Covert, Su-In Lee
FAtt · 171 citations · 02 Dec 2020

Gradient-based Analysis of NLP Models is Manipulable
Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh
AAML, FAtt · 58 citations · 12 Oct 2020

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks
Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft
UQCV, FAtt · 31 citations · 16 Jun 2020

On Tractable Representations of Binary Neural Networks
Weijia Shi, Andy Shih, Adnan Darwiche, Arthur Choi
TPM, OffRL · 69 citations · 05 Apr 2020
"How do I fool you?": Manipulating User Trust via Misleading Black Box
  Explanations
"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
Himabindu Lakkaraju
Osbert Bastani
65
255
0
15 Nov 2019

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
FAtt, AAML, MLAU · 819 citations · 06 Nov 2019

bLIMEy: Surrogate Prediction Explanations Beyond LIME
Kacper Sokol, Alexander Hepburn, Raúl Santos-Rodríguez, Peter A. Flach
FAtt · 38 citations · 29 Oct 2019

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Patrick Schwab, W. Karlen
FAtt, CML · 209 citations · 27 Oct 2019

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom
FAtt, AAML · 61 citations · 04 Oct 2019

DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems
Muhammad Rehman Zafar, N. Khan
FAtt · 157 citations · 24 Jun 2019

Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel
AAML, FAtt · 334 citations · 19 Jun 2019

Certifiably Robust Interpretation in Deep Learning
Alexander Levine, Sahil Singla, Soheil Feizi
FAtt, AAML · 64 citations · 28 May 2019

Fooling Neural Network Interpretations via Adversarial Model Manipulation
Juyeon Heo, Sunghwan Joo, Taesup Moon
AAML, FAtt · 204 citations · 06 Feb 2019

On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
FAtt · 454 citations · 27 Jan 2019

Abduction-Based Explanations for Machine Learning Models
Alexey Ignatiev, Nina Narodytska, Sasha Rubin
FAtt · 226 citations · 26 Nov 2018

Explaining Deep Learning Models - A Bayesian Non-parametric Approach
Wenbo Guo, Sui Huang, Yunzhe Tao, Masashi Sugiyama, Lin Lin
BDL · 47 citations · 07 Nov 2018

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI · 1,970 citations · 08 Oct 2018

Actionable Recourse in Linear Classification
Berk Ustun, Alexander Spangher, Yang Liu
FaML · 550 citations · 18 Sep 2018

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
FAtt, TDI · 216 citations · 08 Aug 2018

Model Agnostic Supervised Local Explanations
Gregory Plumb, Denali Molitor, Ameet Talwalkar
FAtt, LRM, MILM · 199 citations · 09 Jul 2018

On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi Jaakkola
528 citations · 21 Jun 2018

A Symbolic Approach to Explaining Bayesian Network Classifiers
Andy Shih, Arthur Choi, Adnan Darwiche
FAtt · 243 citations · 09 May 2018

Manipulating and Measuring Model Interpretability
Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach
698 citations · 21 Feb 2018

Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
Sandra Wachter, Brent Mittelstadt, Chris Russell
MLAU · 2,360 citations · 01 Nov 2017

Interpretation of Neural Networks is Fragile
Amirata Ghorbani, Abubakar Abid, James Zou
FAtt, AAML · 867 citations · 29 Oct 2017

Interpretability via Model Extraction
Osbert Bastani, Carolyn Kim, Hamsa Bastani
FAtt · 129 citations · 29 Jun 2017

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
FAtt, ODL · 2,226 citations · 12 Jun 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt · 21,939 citations · 22 May 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TDI · 2,899 citations · 14 Mar 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt · 6,015 citations · 04 Mar 2017

Mapping chemical performance on molecular structures using locally interpretable explanations
Leanne S. Whitmore, Anthe George, Corey M. Hudson
FAtt · 12 citations · 22 Nov 2016

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
FAtt · 20,070 citations · 07 Oct 2016

Model-Agnostic Interpretability of Machine Learning
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML · 838 citations · 16 Jun 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
16,990
0
16 Feb 2016

The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
Been Kim, Cynthia Rudin, J. Shah
321 citations · 03 Mar 2015

Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
FAtt, MDE · 100,479 citations · 04 Sep 2014

Speeding up Convolutional Neural Networks with Low Rank Expansions
Max Jaderberg, Andrea Vedaldi, Andrew Zisserman
1,465 citations · 15 May 2014

Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus
FAtt · 1,692 citations · 02 Apr 2014

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
FAtt · 7,308 citations · 20 Dec 2013

Supersparse Linear Integer Models for Interpretable Classification
Berk Ustun, Stefano Tracà, Cynthia Rudin
43 citations · 27 Jun 2013