"Why Should I Trust You?": Explaining the Predictions of Any Classifier
16 February 2016 · arXiv:1602.04938
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier (showing 50 of 4,966)
Learning Qualitatively Diverse and Interpretable Rules for Classification
A. Ross, Weiwei Pan, Finale Doshi-Velez · 22 Jun 2018

Interpretable Discovery in Large Image Data Sets
K. Wagstaff, Jake H. Lee · 21 Jun 2018

On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi Jaakkola · 21 Jun 2018

Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach
A. C. Gusmão, Alvaro H. C. Correia, Glauber De Bona, Fabio Gagliardi Cozman · 20 Jun 2018

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty · 20 Jun 2018 · FaML

Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola · 20 Jun 2018 · MILM, XAI

DeepAffinity: Interpretable Deep Learning of Compound-Protein Affinity through Unified Recurrent and Convolutional Neural Networks
Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen · 20 Jun 2018

Contrastive Explanations with Local Foil Trees
J. V. D. Waa, M. Robeer, J. Diggelen, Matthieu J. S. Brinkhuis, Mark Antonius Neerincx · 19 Jun 2018 · FAtt

RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko · 19 Jun 2018 · FAtt

Instance-Level Explanations for Fraud Detection: A Case Study
Dennis Collaris, L. M. Vink, J. V. Wijk · 19 Jun 2018

Deep Neural Decision Trees
Yongxin Yang, Irene Garcia Morillo, Timothy M. Hospedales · 19 Jun 2018 · PINN

Biased Embeddings from Wild Data: Measuring, Understanding and Removing
Adam Sutton, Thomas Lansdall-Welfare, N. Cristianini · 16 Jun 2018

Right for the Right Reason: Training Agnostic Networks
Sen Jia, Thomas Lansdall-Welfare, N. Cristianini · 16 Jun 2018 · FaML

Binary Classification in Unstructured Space With Hypergraph Case-Based Reasoning
Alexandre Quemy · 16 Jun 2018

Interactive Classification for Deep Learning Interpretation
Ángel Alexander Cabrera, Fred Hohman, Jason Lin, Duen Horng Chau · 14 Jun 2018 · VLM, HAI

Hierarchical interpretations for neural network predictions
Chandan Singh, W. James Murdoch, Bin Yu · 14 Jun 2018

Understanding Patch-Based Learning by Explaining Predictions
Christopher J. Anders, G. Montavon, Wojciech Samek, K. Müller · 11 Jun 2018 · UQCV, FAtt

A New Framework for Machine Intelligence: Concepts and Prototype
Abel Torres Montoya · 06 Jun 2018

Performance Metric Elicitation from Pairwise Classifier Comparisons
Gaurush Hiranandani, Shant Boodaghians, R. Mehta, Oluwasanmi Koyejo · 05 Jun 2018

Locally Interpretable Models and Effects based on Supervised Partitioning (LIME-SUP)
Linwei Hu, Jie Chen, V. Nair, Agus Sudjianto · 02 Jun 2018 · FAtt

A Review of Challenges and Opportunities in Machine Learning for Health
Marzyeh Ghassemi, Tristan Naumann, Peter F. Schulam, Andrew L. Beam, Irene Y. Chen, Rajesh Ranganath · 01 Jun 2018

Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal · 31 May 2018 · XAI

Teaching Meaningful Explanations
Noel Codella, Michael Hind, Karthikeyan N. Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis L. Wei, Aleksandra Mojsilović · 29 May 2018 · FAtt, XAI

Human-in-the-Loop Interpretability Prior
Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez · 29 May 2018

Lightly-supervised Representation Learning with Global Interpretability
M. A. Valenzuela-Escarcega, Ajay Nagesh, Mihai Surdeanu · 29 May 2018 · SSL

Local Rule-Based Explanations of Black Box Decision Systems
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, D. Pedreschi, Franco Turini, F. Giannotti · 28 May 2018

RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records
Bum Chul Kwon, Min-Je Choi, J. Kim, Edward Choi, Young Bin Kim, Soonwook Kwon, Jimeng Sun, Jaegul Choo · 28 May 2018

Semantic Explanations of Predictions
Freddy Lecue, Jiewen Wu · 27 May 2018 · FAtt

Personalized Influence Estimation Technique
Kumarjit Pathak, Jitin Kapila, Aasheesh Barvey · 25 May 2018 · TDI

Communication Algorithms via Deep Learning
Hyeji Kim, Yihan Jiang, Ranvir Rana, Sreeram Kannan, Sewoong Oh, Pramod Viswanath · 23 May 2018

"Why Should I Trust Interactive Learners?" Explaining Interactive Queries of Classifiers to Users
Stefano Teso, Kristian Kersting · 22 May 2018 · FAtt, HAI

Unsupervised Learning of Neural Networks to Explain Neural Networks
Quanshi Zhang, Yu Yang, Yuchen Liu, Ying Nian Wu, Song-Chun Zhu · 18 May 2018 · FAtt, SSL

Defoiling Foiled Image Captions
Pranava Madhyastha, Josiah Wang, Lucia Specia · 16 May 2018

Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models
Jacob R. Kauffmann, K. Müller, G. Montavon · 16 May 2018 · DRL

SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines
Roy Schwartz, Sam Thomson, Noah A. Smith · 15 May 2018

Did the Model Understand the Question?
Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, Kedar Dhamdhere · 14 May 2018 · ELM, OOD, FAtt

Faithfully Explaining Rankings in a News Recommender System
Maartje ter Hoeve, Anne Schuth, Daan Odijk, Maarten de Rijke · 14 May 2018 · OffRL

State Gradients for RNN Memory Analysis
Lyan Verwimp, Hugo Van hamme, Vincent Renkens, P. Wambacq · 11 May 2018

Behavior Analysis of NLI Models: Uncovering the Influence of Three Factors on Robustness
V. Carmona, Jeff Mitchell, Sebastian Riedel · 11 May 2018

A Symbolic Approach to Explaining Bayesian Network Classifiers
Andy Shih, Arthur Choi, Adnan Darwiche · 09 May 2018 · FAtt

Explainable Recommendation: A Survey and New Perspectives
Yongfeng Zhang, Xu Chen · 30 Apr 2018 · XAI, LRM

Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models
Hendrik Strobelt, Sebastian Gehrmann, M. Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush · 25 Apr 2018 · VLM, HAI

A Nutritional Label for Rankings
Ke Yang, Julia Stoyanovich, Abolfazl Asudeh, Bill Howe, H. V. Jagadish, G. Miklau · 21 Apr 2018

Pathologies of Neural Models Make Interpretations Difficult
Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan L. Boyd-Graber · 20 Apr 2018 · AAML, FAtt

Toward Intelligent Autonomous Agents for Cyber Defense: Report of the 2017 Workshop by the North Atlantic Treaty Organization (NATO) Research Group IST-152-RTG
Alexander Kott, R. Thomas, Martin Drašar, Markus Kont, A. Poylisher, ..., H. Harney, Gregory Wehner, A. Guarino, Jana Komárková, James Rowell · 20 Apr 2018

Understanding Community Structure in Layered Neural Networks
C. Watanabe, Kaoru Hiramatsu, K. Kashino · 13 Apr 2018

Visual Analytics for Explainable Deep Learning
Jaegul Choo, Shixia Liu · 07 Apr 2018 · HAI, XAI

Explanations of model predictions with live and breakDown packages
M. Staniak, P. Biecek · 05 Apr 2018 · FAtt

Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"?
L. Edwards, Michael Veale · 20 Mar 2018 · FaML, AILaw

Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras, Marcel van Gerven, W. Haselager · 20 Mar 2018 · XAI