arXiv:1602.04938
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier
50 / 4,966 papers shown
Distilling a Neural Network Into a Soft Decision Tree
Nicholas Frosst, Geoffrey E. Hinton · 439 / 639 / 0 · 27 Nov 2017

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
A. Ross, Finale Doshi-Velez · AAML · 159 / 686 / 0 · 26 Nov 2017

The Promise and Peril of Human Evaluation for Model Interpretability
Bernease Herman · 74 / 144 / 0 · 20 Nov 2017

How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games
Jonathan Dodge, Sean Penney, Claudia Hilderbrand, Andrew Anderson, Margaret Burnett · 39 / 34 / 0 · 19 Nov 2017

Excitation Backprop for RNNs
Sarah Adel Bargal, Andrea Zunino, Donghyun Kim, Jianming Zhang, Vittorio Murino, Stan Sclaroff · 166 / 48 / 0 · 18 Nov 2017

Improving Palliative Care with Deep Learning
Anand Avati, Kenneth Jung, S. Harman, L. Downing, A. Ng, N. Shah · 146 / 375 / 0 · 17 Nov 2017

Beyond Sparsity: Tree Regularization of Deep Models for Interpretability
Mike Wu, M. C. Hughes, S. Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez · AI4CE · 143 / 283 / 0 · 16 Nov 2017

MARGIN: Uncovering Deep Neural Networks using Graph Signal Analysis
Rushil Anirudh, Jayaraman J. Thiagarajan, R. Sridhar, T. Bremer · FAtt, AAML · 58 / 12 / 0 · 15 Nov 2017

Towards Interpretable R-CNN by Unfolding Latent Structures
Tianfu Wu, Wei Sun, Xilai Li, Xi Song, Yangqiu Song · ObjD · 62 / 20 / 0 · 14 Nov 2017

Dynamic Analysis of Executables to Detect and Characterize Malware
Michael R. Smith, J. Ingram, Christopher C. Lamb, T. Draelos, J. Doak, J. Aimone, C. James · 42 / 13 / 0 · 10 Nov 2017

Learning Credible Models
Jiaxuan Wang, Jeeheh Oh, Haozhu Wang, Jenna Wiens · FaML · 87 / 30 / 0 · 08 Nov 2017

"Dave...I can assure you...that it's going to be all right..." -- A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships
Brett W. Israelsen, Nisar R. Ahmed · 70 / 86 / 0 · 08 Nov 2017

Distributed Bayesian Piecewise Sparse Linear Models
M. Asahara, R. Fujimaki · 19 / 0 / 0 · 07 Nov 2017

Visualizing and Understanding Atari Agents
S. Greydanus, Anurag Koul, Jonathan Dodge, Alan Fern · FAtt · 133 / 348 / 0 · 31 Oct 2017

Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks
Aditya Chattopadhyay, Anirban Sarkar, Prantik Howlader, V. Balasubramanian · FAtt · 144 / 2,319 / 0 · 30 Oct 2017

Understanding Hidden Memories of Recurrent Neural Networks
Yao Ming, Shaozu Cao, Ruixiang Zhang, Zerui Li, Yuanzhe Chen, Yangqiu Song, Huamin Qu · HAI · 48 / 201 / 0 · 30 Oct 2017

Examining CNN Representations with respect to Dataset Bias
Quanshi Zhang, Wenguan Wang, Song-Chun Zhu · SSL, FAtt · 61 / 104 / 0 · 29 Oct 2017

Do Convolutional Neural Networks Learn Class Hierarchy?
B. Alsallakh, Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren · 186 / 215 / 0 · 17 Oct 2017

Interpretable Convolutional Neural Networks
Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu · FAtt · 100 / 784 / 0 · 02 Oct 2017

Statistical Parametric Speech Synthesis Incorporating Generative Adversarial Networks
Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari · 76 / 199 / 0 · 23 Sep 2017

Practical Machine Learning for Cloud Intrusion Detection: Challenges and the Way Forward
Ramnath Kumar, Andrew W. Wicker, Matt Swann · AAML · 43 / 43 / 0 · 20 Sep 2017

Human Understandable Explanation Extraction for Black-box Classification Models Based on Matrix Factorization
Jaedeok Kim, Ji-Hoon Seo · FAtt · 101 / 8 / 0 · 18 Sep 2017

Embedding Deep Networks into Visual Explanations
Zhongang Qi, Saeed Khorram, Fuxin Li · 41 / 27 / 0 · 15 Sep 2017

Learning Functional Causal Models with Generative Neural Networks
Hugo Jair Escalante, Sergio Escalera, Xavier Baro, Isabelle M Guyon, Umut Güçlü, Marcel van Gerven · CML, BDL · 105 / 108 / 0 · 15 Sep 2017

Interpreting Shared Deep Learning Models via Explicable Boundary Trees
Huijun Wu, Chen Wang, Jie Yin, Kai Lu, Liming Zhu · FedML · 36 / 5 / 0 · 12 Sep 2017

Opening the Black Box of Financial AI with CLEAR-Trade: A CLass-Enhanced Attentive Response Approach for Explaining and Visualizing Deep Learning-Driven Stock Market Prediction
Devinder Kumar, Graham W. Taylor, Alexander Wong · AIFin · 50 / 18 / 0 · 05 Sep 2017

Learning the PE Header, Malware Detection with Minimal Domain Knowledge
Edward Raff, Jared Sylvester, Charles K. Nicholas · 83 / 119 / 0 · 05 Sep 2017

Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
Wojciech Samek, Thomas Wiegand, K. Müller · XAI, VLM · 95 / 1,195 / 0 · 28 Aug 2017

Understanding and Comparing Deep Neural Networks for Age and Gender Classification
Sebastian Lapuschkin, Alexander Binder, K. Müller, Wojciech Samek · CVBM · 94 / 135 / 0 · 25 Aug 2017

Explaining Anomalies in Groups with Characterizing Subspace Rules
Meghanath Macha, Leman Akoglu · 43 / 39 / 0 · 20 Aug 2017

Early Stage Malware Prediction Using Recurrent Neural Networks
Matilda Rhode, Pete Burnap, K. Jones · AAML · 72 / 255 / 0 · 11 Aug 2017

Data-driven Advice for Applying Machine Learning to Bioinformatics Problems
Randal S. Olson, William La Cava, Zairah Mustahsan, Akshay Varik, J. Moore · OOD · 69 / 266 / 0 · 08 Aug 2017

Axiomatic Characterization of Data-Driven Influence Measures for Classification
Jakub Sliwinski, Martin Strobel, Yair Zick · TDI · 62 / 14 / 0 · 07 Aug 2017

Machine learning for neural decoding
Joshua I. Glaser, Ari S. Benjamin, Raeed H. Chowdhury, M. Perich, L. Miller, Konrad Paul Kording · 107 / 248 / 0 · 02 Aug 2017

Interpretable Active Learning
R. L. Phillips, K. H. Chang, Sorelle A. Friedler · FAtt · 52 / 28 / 0 · 31 Jul 2017

Analysis and Optimization of Convolutional Neural Network Architectures
Martin Thoma · 99 / 73 / 0 · 31 Jul 2017

Using Program Induction to Interpret Transition System Dynamics
Svetlin Penkov, S. Ramamoorthy · AI4CE · 66 / 11 / 0 · 26 Jul 2017

Weakly Submodular Maximization Beyond Cardinality Constraints: Does Randomization Help Greedy?
Lin Chen, Moran Feldman, Amin Karbasi · 77 / 47 / 0 · 13 Jul 2017

A Formal Framework to Characterize Interpretability of Procedures
Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam · 47 / 19 / 0 · 12 Jul 2017

Efficient mixture model for clustering of sparse high dimensional binary data
Marek Śmieja, Krzysztof Hajto, Jacek Tabor · 27 / 15 / 0 · 11 Jul 2017

A causal framework for explaining the predictions of black-box sequence-to-sequence models
David Alvarez-Melis, Tommi Jaakkola · CML · 368 / 205 / 0 · 06 Jul 2017

Efficient Data Representation by Selecting Prototypes with Importance Weights
Karthik S. Gurumoorthy, Amit Dhurandhar, Guillermo Cecchi, Charu Aggarwal · 97 / 22 / 0 · 05 Jul 2017

Interpretable & Explorable Approximations of Black Box Models
Himabindu Lakkaraju, Ece Kamar, R. Caruana, J. Leskovec · FAtt · 95 / 254 / 0 · 04 Jul 2017

Interpretability via Model Extraction
Osbert Bastani, Carolyn Kim, Hamsa Bastani · FAtt · 78 / 129 / 0 · 29 Jun 2017

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller · FaML · 296 / 2,275 / 0 · 24 Jun 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller · XAI · 264 / 4,293 / 0 · 22 Jun 2017

Explaining Recurrent Neural Network Predictions in Sentiment Analysis
L. Arras, G. Montavon, K. Müller, Wojciech Samek · FAtt · 110 / 354 / 0 · 22 Jun 2017

MAGIX: Model Agnostic Globally Interpretable Explanations
Nikaash Puri, Piyush B. Gupta, Pratiksha Agarwal, Sukriti Verma, Balaji Krishnamurthy · FAtt · 111 / 41 / 0 · 22 Jun 2017

Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking
Gabriele Tolomei, Fabrizio Silvestri, Andrew Haines, M. Lalmas · 77 / 209 / 0 · 20 Jun 2017

Chemception: A Deep Neural Network with Minimal Chemistry Knowledge Matches the Performance of Expert-developed QSAR/QSPR Models
Garrett B. Goh, Charles Siegel, Abhinav Vishnu, Nathan Oken Hodas, Nathan Baker · 103 / 158 / 0 · 20 Jun 2017