arXiv:1602.04938 (v3, latest)
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · 16 February 2016
FAtt, FaML
ArXiv (abs) · PDF · HTML
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier (50 of 4,966 papers shown)
Visual Psychophysics for Making Face Recognition Algorithms More Explainable (19 Mar 2018)
Brandon RichardWebster, So Yon Kwon, Christopher Clarizio, Samuel E. Anthony, Walter J. Scheirer · CVBM · 60 / 41 / 0

Towards Explanation of DNN-based Prediction with Guided Feature Inversion (19 Mar 2018)
Mengnan Du, Ninghao Liu, Qingquan Song, Helen Zhou · FAtt · 97 / 127 / 0

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning (13 Mar 2018)
Nicolas Papernot, Patrick McDaniel · OOD, AAML · 156 / 508 / 0

Interpreting Deep Classifier by Visual Distillation of Dark Knowledge (11 Mar 2018)
Kai Xu, Dae Hoon Park, Chang Yi, Charles Sutton · HAI, FAtt · 67 / 26 / 0

Explaining Black-box Android Malware Detection (09 Mar 2018)
Marco Melis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli · AAML, FAtt · 49 / 44 / 0

The Challenge of Crafting Intelligible Intelligence (09 Mar 2018)
Daniel S. Weld, Gagan Bansal · 58 / 244 / 0

Visual Explanations From Deep 3D Convolutional Neural Networks for Alzheimer's Disease Classification (07 Mar 2018)
Chengliang Yang, Anand Rangarajan, Sanjay Ranka · FAtt · 63 / 142 / 0

Explain Yourself: A Natural Language Interface for Scrutable Autonomous Robots (06 Mar 2018)
Javier Chiyah-Garcia, D. A. Robb, Xingkun Liu, A. Laskov, P. Patrón, H. Hastie · 71 / 25 / 0

On Cognitive Preferences and the Plausibility of Rule-based Models (04 Mar 2018)
Johannes Fürnkranz, Tomáš Kliegr, Heiko Paulheim · LRM · 73 / 71 / 0

Improved Explainability of Capsule Networks: Relevance Path by Agreement (27 Feb 2018)
Atefeh Shahroudnejad, Arash Mohammadi, Konstantinos N. Plataniotis · AAML, MedIm · 50 / 62 / 0

Interpreting Complex Regression Models (26 Feb 2018)
Noa Avigdor-Elgrabli, Alex Libov, M. Viderman, R. Wolff · 37 / 1 / 0

Interpretable Charge Predictions for Criminal Cases: Learning to Generate Court Views from Fact Descriptions (23 Feb 2018)
Hai Ye, Xin Jiang, Zhunchen Luo, Wen-Han Chao · 80 / 132 / 0

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation (21 Feb 2018)
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan · MLT, FAtt · 184 / 576 / 0

Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives (21 Feb 2018)
Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das · FAtt · 140 / 592 / 0

Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections (21 Feb 2018)
Xin Zhang, Armando Solar-Lezama, Rishabh Singh · FAtt · 115 / 63 / 0

Teaching Categories to Human Learners with Visual Explanations (20 Feb 2018)
Oisin Mac Aodha, Shihan Su, Yuxin Chen, Pietro Perona, Yisong Yue · 135 / 71 / 0

Finding Influential Training Samples for Gradient Boosted Decision Trees (19 Feb 2018)
B. Sharchilev, Yury Ustinovsky, P. Serdyukov, Maarten de Rijke · TDI · 74 / 57 / 0

Exact and Consistent Interpretation for Piecewise Linear Neural Networks: A Closed Form Solution (17 Feb 2018)
Lingyang Chu, X. Hu, Juhua Hu, Lanjun Wang, J. Pei · 62 / 100 / 0

Explainable Prediction of Medical Codes from Clinical Text (15 Feb 2018)
J. Mullenbach, Sarah Wiegreffe, J. Duke, Jimeng Sun, Jacob Eisenstein · FAtt · 95 / 576 / 0

Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute (12 Feb 2018)
Tong Wang · 50 / 8 / 0

Evaluating Compositionality in Sentence Embeddings (12 Feb 2018)
Ishita Dasgupta, Demi Guo, Andreas Stuhlmüller, S. Gershman, Noah D. Goodman · CoGe · 98 / 121 / 0

Consistent Individualized Feature Attribution for Tree Ensembles (12 Feb 2018)
Scott M. Lundberg, G. Erion, Su-In Lee · FAtt, TDI · 86 / 1,408 / 0

Influence-Directed Explanations for Deep Convolutional Networks (11 Feb 2018)
Klas Leino, S. Sen, Anupam Datta, Matt Fredrikson, Linyi Li · TDI, FAtt · 113 / 75 / 0

Global Model Interpretation via Recursive Partitioning (11 Feb 2018)
Chengliang Yang, Anand Rangarajan, Sanjay Ranka · FAtt · 62 / 80 / 0

WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-Hop Inference (08 Feb 2018)
Peter Alexander Jansen, Elizabeth Wainwright, Steven Marmorstein, Clayton T. Morrison · 89 / 123 / 0

Granger-causal Attentive Mixtures of Experts: Learning Important Features with Neural Networks (06 Feb 2018)
Patrick Schwab, Djordje Miladinovic, W. Karlen · CML · 99 / 57 / 0

A Survey Of Methods For Explaining Black Box Models (06 Feb 2018)
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti · XAI · 179 / 3,996 / 0

Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making (03 Feb 2018)
Michael Veale, Max Van Kleek, Reuben Binns · 71 / 424 / 0

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation (02 Feb 2018)
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez · FAtt, XAI · 110 / 243 / 0

Visual Interpretability for Deep Learning: a Survey (02 Feb 2018)
Quanshi Zhang, Song-Chun Zhu · FaML, HAI · 179 / 822 / 0

Interpretable Deep Convolutional Neural Networks via Meta-learning (02 Feb 2018)
Xuan Liu, Xiaoguang Wang, Stan Matwin · FaML · 154 / 38 / 0

Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations (02 Feb 2018)
M. Harradon, Jeff Druce, Brian E. Ruttenberg · BDL, CML · 53 / 82 / 0

Interpreting CNNs via Decision Trees (01 Feb 2018)
Quanshi Zhang, Yu Yang, Ying Nian Wu, Song-Chun Zhu · FAtt · 88 / 323 / 0

'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions (31 Jan 2018)
Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, N. Shadbolt · FaML · 77 / 546 / 0

The Intriguing Properties of Model Explanations (30 Jan 2018)
Maruan Al-Shedivat, Kumar Avinava Dubey, Eric Xing · FAtt · 41 / 7 / 0

Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers (21 Jan 2018)
Fred Hohman, Minsuk Kahng, Robert S. Pienta, Duen Horng Chau · OOD, HAI · 101 / 541 / 0

Evaluating neural network explanation methods using hybrid documents and morphological agreement (19 Jan 2018)
Nina Pörner, Benjamin Roth, Hinrich Schütze · 60 / 9 / 0

A Comparative Study of Rule Extraction for Recurrent Neural Networks (16 Jan 2018)
Qinglong Wang, Kaixuan Zhang, Alexander Ororbia, Masashi Sugiyama, Xue Liu, C. Lee Giles · 80 / 11 / 0

A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning (16 Jan 2018)
Sina Mohseni, Jeremy E. Block, Eric D. Ragan · FAtt, XAI · 88 / 61 / 0

Deep Learning: A Critical Appraisal (02 Jan 2018)
G. Marcus · HAI, VLM · 149 / 1,043 / 0

What do we need to build explainable AI systems for the medical domain? (28 Dec 2017)
Andreas Holzinger, Chris Biemann, C. Pattichis, D. Kell · 79 / 694 / 0

The Robust Manifold Defense: Adversarial Training using Generative Models (26 Dec 2017)
A. Jalal, Andrew Ilyas, C. Daskalakis, A. Dimakis · AAML · 109 / 174 / 0

Dropout Feature Ranking for Deep Learning Models (22 Dec 2017)
C. Chang, Ladislav Rampášek, Anna Goldenberg · OOD · 71 / 50 / 0

Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology (18 Dec 2017)
Andreas Holzinger, Bernd Malle, Peter Kieseberg, P. Roth, Heimo Müller, Robert Reihs, K. Zatloukal · 43 / 91 / 0

A trans-disciplinary review of deep learning research for water resources scientists (06 Dec 2017)
Chaopeng Shen · AI4CE · 226 / 699 / 0

SMILES2Vec: An Interpretable General-Purpose Deep Neural Network for Predicting Chemical Properties (06 Dec 2017)
Garrett B. Goh, Nathan Oken Hodas, Charles Siegel, Abhinav Vishnu · 69 / 144 / 0

Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences (02 Dec 2017)
Tim Miller, Piers Howe, L. Sonenberg · AI4TS, SyDa · 86 / 374 / 0

ConvNets and ImageNet Beyond Accuracy: Understanding Mistakes and Uncovering Biases (30 Nov 2017)
Pierre Stock, Moustapha Cissé · FaML · 92 / 46 / 0

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) (30 Nov 2017)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres · FAtt · 270 / 1,848 / 0

Contextual Outlier Interpretation (28 Nov 2017)
Ninghao Liu, DongHwa Shin, Helen Zhou · 60 / 72 / 0