"Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 (v3, latest) · 16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Tags: FAtt, FaML

Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier
50 of 4,971 citing papers shown
DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems
Muhammad Rehman Zafar, N. Khan [FAtt] · 124 / 159 / 0 · 24 Jun 2019
Generating Counterfactual and Contrastive Explanations using SHAP
Shubham Rathi · 77 / 57 / 0 · 21 Jun 2019
Machine Learning Testing: Survey, Landscapes and Horizons
Jie M. Zhang, Mark Harman, Lei Ma, Yang Liu [VLM, AILaw] · 98 / 756 / 0 · 19 Jun 2019
Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks
R. Confalonieri, Tillman Weyde, Tarek R. Besold, Fermín Moscoso del Prado Martín · 58 / 24 / 0 · 19 Jun 2019
Incorporating Priors with Feature Attribution on Text Classification
Frederick Liu, Besim Avci [FAtt, FaML] · 108 / 120 / 0 · 19 Jun 2019
Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel [AAML, FAtt] · 88 / 335 / 0 · 19 Jun 2019
VizADS-B: Analyzing Sequences of ADS-B Images Using Explainable Convolutional LSTM Encoder-Decoder to Detect Cyber Attacks
Sefi Akerman, Edan Habler, A. Shabtai · 82 / 18 / 0 · 19 Jun 2019
From Clustering to Cluster Explanations via Neural Networks
Jacob R. Kauffmann, Malte Esders, Lukas Ruff, G. Montavon, Wojciech Samek, K. Müller · 79 / 72 / 0 · 18 Jun 2019
Exact and Consistent Interpretation of Piecewise Linear Models Hidden behind APIs: A Closed Form Solution
Zicun Cong, Lingyang Chu, Lanjun Wang, X. Hu, J. Pei · 420 / 5 / 0 · 17 Jun 2019
ASAC: Active Sensing using Actor-Critic models
Jinsung Yoon, James Jordon, M. Schaar [CML] · 59 / 16 / 0 · 16 Jun 2019
MoËT: Mixture of Expert Trees and its Application to Verifiable Reinforcement Learning
Marko Vasic, Andrija Petrović, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, S. Khurshid [OffRL, MoE] · 96 / 25 / 0 · 16 Jun 2019
Yoga-Veganism: Correlation Mining of Twitter Health Data
Tunazzina Islam · 33 / 23 / 0 · 15 Jun 2019
Understanding artificial intelligence ethics and safety
David Leslie [FaML, AI4TS] · 74 / 363 / 0 · 11 Jun 2019
Toward Best Practices for Explainable B2B Machine Learning
Kit Kuksenok · 13 / 0 / 0 · 11 Jun 2019
Extracting Interpretable Concept-Based Decision Trees from CNNs
Conner Chyung, Michael Tsang, Yan Liu [FAtt] · 41 / 8 / 0 · 11 Jun 2019
Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding
Haotian Ma, Hao Zhang, Fan Zhou, Yinqing Zhang, Quanshi Zhang [FAtt] · 23 / 0 / 0 · 10 Jun 2019
Is Attention Interpretable?
Sofia Serrano, Noah A. Smith · 112 / 687 / 0 · 09 Jun 2019
Proposed Guidelines for the Responsible Use of Explainable Machine Learning
Patrick Hall, Navdeep Gill, N. Schmidt [SILM, XAI, FaML] · 77 / 29 / 0 · 08 Jun 2019
ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan [AAML] · 93 / 101 / 0 · 08 Jun 2019
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness
Walt Woods, Jack H Chen, C. Teuscher [AAML] · 66 / 46 / 0 · 07 Jun 2019
XRAI: Better Attributions Through Regions
A. Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry [FAtt, XAI] · 74 / 213 / 0 · 06 Jun 2019
Survey on Publicly Available Sinhala Natural Language Processing Tools and Research
Nisansa de Silva · 224 / 45 / 0 · 05 Jun 2019
Evaluating Explanation Methods for Deep Learning in Security
Alexander Warnecke, Dan Arp, Christian Wressnegger, Konrad Rieck [XAI, AAML, FAtt] · 71 / 94 / 0 · 05 Jun 2019
c-Eval: A Unified Metric to Evaluate Feature-based Explanations via Perturbation
Minh Nhat Vu, Truc D. T. Nguyen, Nhathai Phan, Ralucca Gera, My T. Thai [AAML, FAtt] · 77 / 22 / 0 · 05 Jun 2019
Interpretable and Differentially Private Predictions
Frederik Harder, Matthias Bauer, Mijung Park [FAtt] · 71 / 53 / 0 · 05 Jun 2019
A Just and Comprehensive Strategy for Using NLP to Address Online Abuse
David Jurgens, Eshwar Chandrasekharan, Libby Hemphill · 85 / 138 / 0 · 04 Jun 2019
Learning Interpretable Shapelets for Time Series Classification through Adversarial Regularization
Yichang Wang, Rémi Emonet, Elisa Fromont, S. Malinowski, Etienne Ménager, Loic Mosser, R. Tavenard [AI4TS] · 43 / 12 / 0 · 03 Jun 2019
Model Agnostic Contrastive Explanations for Structured Data
Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchi Puri [FAtt] · 88 / 83 / 0 · 31 May 2019
Do Human Rationales Improve Machine Explanations?
Julia Strout, Ye Zhang, Raymond J. Mooney · 84 / 58 / 0 · 31 May 2019
Explainability Techniques for Graph Convolutional Networks
Federico Baldassarre, Hossein Azizpour [GNN, FAtt] · 178 / 272 / 0 · 31 May 2019
Leveraging Latent Features for Local Explanations
Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, P. Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu [FAtt] · 115 / 38 / 0 · 29 May 2019
Learning Representations by Humans, for Humans
Sophie Hilgard, Nir Rosenfeld, M. Banaji, Jack Cao, David C. Parkes [OCL, HAI, AI4CE] · 103 / 29 / 0 · 29 May 2019
Generation of Policy-Level Explanations for Reinforcement Learning
Nicholay Topin, Manuela Veloso · 75 / 75 / 0 · 28 May 2019
Adversarial Robustness Guarantees for Classification with Gaussian Processes
Arno Blaas, A. Patané, Luca Laurenti, L. Cardelli, Marta Z. Kwiatkowska, Stephen J. Roberts [GP, AAML] · 89 / 21 / 0 · 28 May 2019
EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction
Diane Bouchacourt, Ludovic Denoyer [FAtt] · 74 / 21 / 0 · 28 May 2019
Analyzing the Interpretability Robustness of Self-Explaining Models
Haizhong Zheng, Earlence Fernandes, A. Prakash [AAML, LRM] · 76 / 7 / 0 · 27 May 2019
Infusing domain knowledge in AI-based "black box" models for better explainability with application in bankruptcy prediction
Sheikh Rabiul Islam, W. Eberle, Sid Bundy, S. Ghafoor [MLAU] · 69 / 23 / 0 · 27 May 2019
Interpretable Neural Predictions with Differentiable Binary Variables
Jasmijn Bastings, Wilker Aziz, Ivan Titov · 89 / 215 / 0 · 20 May 2019
The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning
Mark T. Keane, Eoin M. Kenny · 81 / 13 / 0 · 20 May 2019
Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations
R. Mothilal, Amit Sharma, Chenhao Tan [CML] · 140 / 1,032 / 0 · 19 May 2019
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees
Summer Devlin, Chandan Singh, W. James Murdoch, Bin Yu [FAtt] · 54 / 14 / 0 · 18 May 2019
How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins
Mark T. Keane, Eoin M. Kenny · 125 / 81 / 0 · 17 May 2019
From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
Jessica Morley, Luciano Floridi, Libby Kinsey, Anat Elhalal · 83 / 57 / 0 · 15 May 2019
Modelling urban networks using Variational Autoencoders
Kira Kempinska, R. Murcio [GNN] · 42 / 39 / 0 · 14 May 2019
What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
S. Tonekaboni, Shalmali Joshi, M. Mccradden, Anna Goldenberg · 97 / 400 / 0 · 13 May 2019
Explainable AI for Trees: From Local Explanations to Global Understanding
Scott M. Lundberg, G. Erion, Hugh Chen, A. DeGrave, J. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, Su-In Lee [FAtt] · 106 / 291 / 0 · 11 May 2019
Interpret Federated Learning with Shapley Values
Guan Wang [FAtt, FedML] · 71 / 92 / 0 · 11 May 2019
Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model
Tong Wang, Qihang Lin · 139 / 19 / 0 · 10 May 2019
Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges
Rob Ashmore, R. Calinescu, Colin Paterson [AI4TS] · 73 / 119 / 0 · 10 May 2019
Embedding Human Knowledge into Deep Neural Network via Attention Map
Masahiro Mitsuhara, Hiroshi Fukui, Yusuke Sakashita, Takanori Ogata, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi · 95 / 73 / 0 · 09 May 2019