arXiv:1602.04938
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
Tags: FAtt, FaML
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier
Showing 50 of 4,253 citing papers.
A Blended Deep Learning Approach for Predicting User Intended Actions
Fei Tan, Zhi Wei, Jun He, Xiang Wu, Bo Peng, Haoran Liu, Zhenyu Yan (11 Oct 2018)

What made you do this? Understanding black-box decisions with sufficient input subsets
Brandon Carter, Jonas W. Mueller, Siddhartha Jain, David K. Gifford (09 Oct 2018) [FAtt]

Understanding the Origins of Bias in Word Embeddings
Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, R. Zemel (08 Oct 2018) [FaML]

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim (08 Oct 2018) [FAtt, AAML]

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim (08 Oct 2018) [FAtt, AAML, XAI]

On the Art and Science of Machine Learning Explanations
Patrick Hall (05 Oct 2018) [FAtt, XAI]

Projective Inference in High-dimensional Problems: Prediction and Feature Selection
Juho Piironen, Markus Paasiniemi, Aki Vehtari (04 Oct 2018)

Interpreting Layered Neural Networks via Hierarchical Modular Representation
C. Watanabe (03 Oct 2018)

Explainable Black-Box Attacks Against Model-based Authentication
Washington Garcia, Joseph I. Choi, S. K. Adari, S. Jha, Kevin R. B. Butler (28 Sep 2018)
A User-based Visual Analytics Workflow for Exploratory Model Analysis
Dylan Cashman, S. Humayoun, Florian Heimerl, Kendall Park, Subhajit Das, ..., Abigail Mosca, J. Stasko, Alex Endert, Michael Gleicher, Remco Chang (27 Sep 2018)

Actionable Recourse in Linear Classification
Berk Ustun, Alexander Spangher, Yang Liu (18 Sep 2018) [FaML]

Object-sensitive Deep Reinforcement Learning
Yuezhang Li, Katia P. Sycara, R. Iyer (17 Sep 2018)

Transparency and Explanation in Deep Reinforcement Learning Neural Networks
R. Iyer, Yuezhang Li, Huao Li, M. Lewis, R. Sundar, Katia P. Sycara (17 Sep 2018)

Fair lending needs explainable models for responsible recommendation
Jiahao Chen (12 Sep 2018) [FaML, SILM]

Assessing Composition in Sentence Vector Representations
Allyson Ettinger, Ahmed Elgohary, C. Phillips, Philip Resnik (11 Sep 2018) [CoGe]

Automated Test Generation to Detect Individual Discrimination in AI Models
Aniya Aggarwal, P. Lohia, Seema Nagar, Kuntal Dey, Diptikalyan Saha (10 Sep 2018)

Interpreting Neural Networks With Nearest Neighbors
Eric Wallace, Shi Feng, Jordan L. Boyd-Graber (08 Sep 2018) [AAML, FAtt, MILM]

Faithful Multimodal Explanation for Visual Question Answering
Jialin Wu, Raymond J. Mooney (08 Sep 2018)

DeepPINK: reproducible feature selection in deep neural networks
Yang Young Lu, Yingying Fan, Jinchi Lv, William Stafford Noble (04 Sep 2018) [FAtt]

Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts
Samuel Carton, Qiaozhu Mei, Paul Resnick (01 Sep 2018) [FAtt, AAML]
An Operation Sequence Model for Explainable Neural Machine Translation
Felix Stahlberg, Danielle Saunders, Bill Byrne (29 Aug 2018) [LRM, MILM]

Targeted Nonlinear Adversarial Perturbations in Images and Videos
R. Rey-de-Castro, H. Rabitz (27 Aug 2018) [AAML]

Unknown Examples & Machine Learning Model Generalization
Yeounoh Chung, P. Haas, E. Upfal, Tim Kraska (24 Aug 2018) [OOD]

Shedding Light on Black Box Machine Learning Algorithms: Development of an Axiomatic Framework to Assess the Quality of Methods that Explain Individual Predictions
Milo Honegger (15 Aug 2018)

Explaining the Unique Nature of Individual Gait Patterns with Deep Learning
Fabian Horst, Sebastian Lapuschkin, Wojciech Samek, K. Müller, W. Schöllhorn (13 Aug 2018) [AI4CE]

Text Classification using Capsules
Jaeyoung Kim, Sion Jang, Sungchul Choi, Eunjeong Lucy Park (12 Aug 2018)

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan (08 Aug 2018) [FAtt, TDI]

Using Machine Learning Safely in Automotive Software: An Assessment and Adaption of Software Process Requirements in ISO 26262
Rick Salay, Krzysztof Czarnecki (05 Aug 2018)

Techniques for Interpretable Machine Learning
Mengnan Du, Ninghao Liu, Xia Hu (31 Jul 2018) [FaML]
Grounding Visual Explanations
Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata (25 Jul 2018) [FAtt]

Contrastive Explanations for Reinforcement Learning in terms of Expected Consequences
J. V. D. Waa, J. Diggelen, K. Bosch, Mark Antonius Neerincx (23 Jul 2018) [OffRL]

TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time
Feargus Pendlebury, Fabio Pierazzi, Roberto Jordaney, Johannes Kinder, Lorenzo Cavallaro (20 Jul 2018)

Take a Look Around: Using Street View and Satellite Images to Estimate House Prices
Stephen Law, Brooks Paige, Chris Russell (18 Jul 2018)

RuleMatrix: Visualizing and Understanding Classifiers with Rules
Yao Ming, Huamin Qu, E. Bertini (17 Jul 2018) [FAtt]

Layer-wise Relevance Propagation for Explainable Recommendations
Homanga Bharadhwaj (17 Jul 2018) [FAtt]

Automated Data Slicing for Model Validation: A Big Data - AI Integration Approach
Yeounoh Chung, Tim Kraska, N. Polyzotis, Ki Hyun Tae, Steven Euijong Whang (16 Jul 2018)

Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees
Guiliang Liu, Oliver Schulte, Wang Zhu, Qingcan Li (16 Jul 2018) [AI4CE]

A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees
Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska (10 Jul 2018) [AAML]

Model Agnostic Supervised Local Explanations
Gregory Plumb, Denali Molitor, Ameet Talwalkar (09 Jul 2018) [FAtt, LRM, MILM]

Women also Snowboard: Overcoming Bias in Captioning Models (Extended Abstract)
Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, Anna Rohrbach (02 Jul 2018)
Optimal Piecewise Local-Linear Approximations
Kartik Ahuja, W. Zame, M. Schaar (27 Jun 2018) [FAtt]

Open the Black Box Data-Driven Explanation of Black Box Decision Systems
D. Pedreschi, F. Giannotti, Riccardo Guidotti, A. Monreale, Luca Pappalardo, Salvatore Ruggieri, Franco Turini (26 Jun 2018)

xGEMs: Generating Examplars to Explain Black-Box Models
Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, Joydeep Ghosh (22 Jun 2018) [MLAU]

Learning Qualitatively Diverse and Interpretable Rules for Classification
A. Ross, Weiwei Pan, Finale Doshi-Velez (22 Jun 2018)

Interpretable Discovery in Large Image Data Sets
K. Wagstaff, Jake H. Lee (21 Jun 2018)

On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi Jaakkola (21 Jun 2018)

Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach
A. C. Gusmão, Alvaro H. C. Correia, Glauber De Bona, Fabio Gagliardi Cozman (20 Jun 2018)

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty (20 Jun 2018) [FaML]

Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola (20 Jun 2018) [MILM, XAI]

DeepAffinity: Interpretable Deep Learning of Compound-Protein Affinity through Unified Recurrent and Convolutional Neural Networks
Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen (20 Jun 2018)