"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
arXiv:1602.04938 (v3, latest) · 16 February 2016
Topics: FAtt, FaML
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier (showing 50 of 4,971)
| Title | Authors | Topics | Counts | Date |
| --- | --- | --- | --- | --- |
| Interpretability with Accurate Small Models | Abhishek Ghose, Balaraman Ravindran | | 110 · 1 · 0 | 04 May 2019 |
| Visualizing Deep Networks by Optimizing with Integrated Gradients | Zhongang Qi, Saeed Khorram, Fuxin Li | FAtt | 83 · 124 · 0 | 02 May 2019 |
| Unrestricted Permutation forces Extrapolation: Variable Importance Requires at least One More Model, or There Is No Free Variable Importance | Giles Hooker, L. Mentch, Siyu Zhou | | 93 · 159 · 0 | 01 May 2019 |
| Interpretable multiclass classification by MDL-based rule lists | Hugo Manuel Proença, M. Leeuwen | | 56 · 48 · 0 | 01 May 2019 |
| To believe or not to believe: Validating explanation fidelity for dynamic malware analysis | Li-Wei Chen, Carter Yagemann, Evan Downing | AAML, FAtt | 37 · 3 · 0 | 30 Apr 2019 |
| Factor Analysis in Fault Diagnostics Using Random Forest | Nagdev Amruthnath, Tarun Gupta | | 14 · 10 · 0 | 30 Apr 2019 |
| A scalable saliency-based Feature selection method with instance level information | Brais Cancela, V. Bolón-Canedo, Amparo Alonso-Betanzos, João Gama | FAtt | 62 · 13 · 0 | 30 Apr 2019 |
| Adversarial Training and Robustness for Multiple Perturbations | Florian Tramèr, Dan Boneh | AAML, SILM | 99 · 380 · 0 | 30 Apr 2019 |
| "Why Should You Trust My Explanation?" Understanding Uncertainty in LIME Explanations | Hui Fen Tan, Kuangyan Song, Yiming Sun, Yujia Zhang, Madeleine Udell | FAtt | 115 · 19 · 0 | 29 Apr 2019 |
| Evaluating Recurrent Neural Network Explanations | L. Arras, Ahmed Osman, K. Müller, Wojciech Samek | XAI, FAtt | 117 · 88 · 0 | 26 Apr 2019 |
| Applying machine learning to improve simulations of a chaotic dynamical system using empirical error correction | P. Watson | AI4Cl, AI4CE | 64 · 65 · 0 | 24 Apr 2019 |
| Concise Fuzzy System Modeling Integrating Soft Subspace Clustering and Sparse Learning | Peng Xu, Zhaohong Deng, Chen Cui, Te Zhang, K. Choi, Suhang Gu, Jun Wang, Shitong Wang | | 46 · 32 · 0 | 24 Apr 2019 |
| Explaining a prediction in some nonlinear models | Cosimo Izzo | FAtt | 19 · 0 · 0 | 21 Apr 2019 |
| Explaining Deep Classification of Time-Series Data with Learned Prototypes | Alan H. Gee, Diego Garcia-Olano, Joydeep Ghosh, D. Paydarfar | AI4TS | 103 · 67 · 0 | 18 Apr 2019 |
| "Why did you do that?": Explaining black box models with Inductive Synthesis | Görkem Paçaci, David Johnson, S. McKeever, A. Hamfelt | | 35 · 6 · 0 | 17 Apr 2019 |
| Explainability in Human-Agent Systems | A. Rosenfeld, A. Richardson | XAI | 83 · 206 · 0 | 17 Apr 2019 |
| HARK Side of Deep Learning -- From Grad Student Descent to Automated Machine Learning | O. Gencoglu, M. Gils, E. Guldogan, Chamin Morikawa, Mehmet Süzen, M. Gruber, J. Leinonen, H. Huttunen | | 98 · 36 · 0 | 16 Apr 2019 |
| Counterfactual Visual Explanations | Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, Stefan Lee | CML | 95 · 511 · 0 | 16 Apr 2019 |
| Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization | Nina Schaaf, Marco F. Huber, Johannes Maucher | | 110 · 36 · 0 | 10 Apr 2019 |
| Enhancing Time Series Momentum Strategies Using Deep Neural Networks | Bryan Lim, S. Zohren, Stephen J. Roberts | AIFin, AI4TS | 72 · 90 · 0 | 09 Apr 2019 |
| Software and application patterns for explanation methods | Maximilian Alber | | 80 · 11 · 0 | 09 Apr 2019 |
| Regression Concept Vectors for Bidirectional Explanations in Histopathology | Mara Graziani, Vincent Andrearczyk, Henning Muller | | 91 · 81 · 0 | 09 Apr 2019 |
| Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations | Christian A. Scholbeck, Christoph Molnar, C. Heumann, B. Bischl, Giuseppe Casalicchio | | 100 · 27 · 0 | 08 Apr 2019 |
| Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability | Christoph Molnar, Giuseppe Casalicchio, B. Bischl | FAtt | 54 · 60 · 0 | 08 Apr 2019 |
| Visualization of Convolutional Neural Networks for Monocular Depth Estimation | Junjie Hu, Yan Zhang, Takayuki Okatani | MDE | 124 · 83 · 0 | 06 Apr 2019 |
| A Categorisation of Post-hoc Explanations for Predictive Models | John Mitros, Brian Mac Namee | XAI, CML | 28 · 1 · 0 | 04 Apr 2019 |
| Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations | Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau | FAtt, 3DH, HAI | 90 · 217 · 0 | 04 Apr 2019 |
| Understanding the efficacy, reliability and resiliency of computer vision techniques for malware detection and future research directions | Li-Wei Chen | AAML | 28 · 1 · 0 | 03 Apr 2019 |
| Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks | Woo-Jeoung Nam, Shir Gur, Jaesik Choi, Lior Wolf, Seong-Whan Lee | FAtt | 80 · 99 · 0 | 01 Apr 2019 |
| VINE: Visualizing Statistical Interactions in Black Box Models | M. Britton | FAtt | 63 · 22 · 0 | 01 Apr 2019 |
| Interpreting Black Box Models via Hypothesis Testing | Collin Burns, Jesse Thomason, Wesley Tansey | FAtt | 65 · 9 · 0 | 29 Mar 2019 |
| Do Not Trust Additive Explanations | Alicja Gosiewska, P. Biecek | | 73 · 42 · 0 | 27 Mar 2019 |
| On Attribution of Recurrent Neural Network Predictions via Additive Decomposition | Mengnan Du, Ninghao Liu, Fan Yang, Shuiwang Ji, Helen Zhou | FAtt | 71 · 51 · 0 | 27 Mar 2019 |
| Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation | Marco Ancona, Cengiz Öztireli, Markus Gross | FAtt, TDI | 121 · 229 · 0 | 26 Mar 2019 |
| Explaining individual predictions when features are dependent: More accurate approximations to Shapley values | K. Aas, Martin Jullum, Anders Løland | FAtt, TDI | 90 · 633 · 0 | 25 Mar 2019 |
| On the Robustness of Deep K-Nearest Neighbors | Chawin Sitawarin, David Wagner | AAML, OOD | 140 · 58 · 0 | 20 Mar 2019 |
| NeuralHydrology -- Interpreting LSTMs in Hydrology | Frederik Kratzert, M. Herrnegger, D. Klotz, Sepp Hochreiter, Günter Klambauer | | 60 · 86 · 0 | 19 Mar 2019 |
| Natural Language Interaction with Explainable AI Models | Arjun Reddy Akula, S. Todorovic, J. Chai, Song-Chun Zhu | | 74 · 23 · 0 | 13 Mar 2019 |
| GNNExplainer: Generating Explanations for Graph Neural Networks | Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, J. Leskovec | LLMAG | 163 · 1,336 · 0 | 10 Mar 2019 |
| Explaining Anomalies Detected by Autoencoders Using SHAP | Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, Lior Rokach | FAtt, TDI | 74 · 86 · 0 | 06 Mar 2019 |
| Copying Machine Learning Classifiers | Irene Unceta, Jordi Nin, O. Pujol | | 96 · 18 · 0 | 05 Mar 2019 |
| Deep learning in bioinformatics: introduction, application, and perspective in big data era | Yu Li, Chao Huang, Lizhong Ding, Zhongxiao Li, Yijie Pan, Xin Gao | AI4CE | 96 · 302 · 0 | 28 Feb 2019 |
| Reliable Deep Grade Prediction with Uncertainty Estimation | Qian Hu, Huzefa Rangwala | | 53 · 39 · 0 | 26 Feb 2019 |
| Unmasking Clever Hans Predictors and Assessing What Machines Really Learn | Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller | | 106 · 1,022 · 0 | 26 Feb 2019 |
| Functional Transparency for Structured Data: a Game-Theoretic Approach | Guang-He Lee, Wengong Jin, David Alvarez-Melis, Tommi Jaakkola | | 67 · 19 · 0 | 26 Feb 2019 |
| Saliency Learning: Teaching the Model Where to Pay Attention | Reza Ghaeini, Xiaoli Z. Fern, Hamed Shahbazi, Prasad Tadepalli | FAtt, XAI | 102 · 31 · 0 | 22 Feb 2019 |
| Explaining a black-box using Deep Variational Information Bottleneck Approach | Seo-Jin Bang, P. Xie, Heewook Lee, Wei Wu, Eric Xing | XAI, FAtt | 77 · 77 · 0 | 19 Feb 2019 |
| Regularizing Black-box Models for Improved Interpretability | Gregory Plumb, Maruan Al-Shedivat, Ángel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar | AAML | 95 · 80 · 0 | 18 Feb 2019 |
| STRIP: A Defence Against Trojan Attacks on Deep Neural Networks | Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal | AAML | 84 · 819 · 0 | 18 Feb 2019 |
| Significance Tests for Neural Networks | Enguerrand Horel, K. Giesecke | | 57 · 56 · 0 | 16 Feb 2019 |