ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML · 16 February 2016

Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier

50 / 4,267 papers shown

Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees
Guiliang Liu, Oliver Schulte, Wang Zhu, Qingcan Li
AI4CE · 17 · 134 · 0 · 16 Jul 2018

A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees
Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
AAML · 19 · 111 · 0 · 10 Jul 2018

Model Agnostic Supervised Local Explanations
Gregory Plumb, Denali Molitor, Ameet Talwalkar
FAtt, LRM, MILM · 14 · 196 · 0 · 09 Jul 2018

Women also Snowboard: Overcoming Bias in Captioning Models (Extended Abstract)
Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, Anna Rohrbach
39 · 477 · 0 · 02 Jul 2018

Optimal Piecewise Local-Linear Approximations
Kartik Ahuja, W. Zame, M. Schaar
FAtt · 27 · 1 · 0 · 27 Jun 2018
Open the Black Box Data-Driven Explanation of Black Box Decision Systems
D. Pedreschi, F. Giannotti, Riccardo Guidotti, A. Monreale, Luca Pappalardo, Salvatore Ruggieri, Franco Turini
19 · 38 · 0 · 26 Jun 2018

xGEMs: Generating Examplars to Explain Black-Box Models
Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, Joydeep Ghosh
MLAU · 25 · 40 · 0 · 22 Jun 2018

Learning Qualitatively Diverse and Interpretable Rules for Classification
A. Ross, Weiwei Pan, Finale Doshi-Velez
16 · 13 · 0 · 22 Jun 2018

Interpretable Discovery in Large Image Data Sets
K. Wagstaff, Jake H. Lee
22 · 9 · 0 · 21 Jun 2018

On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi Jaakkola
30 · 522 · 0 · 21 Jun 2018

Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach
A. C. Gusmão, Alvaro H. C. Correia, Glauber De Bona, Fabio Gagliardi Cozman
27 · 22 · 0 · 20 Jun 2018
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty
FaML · 29 · 164 · 0 · 20 Jun 2018

Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola
MILM, XAI · 56 · 933 · 0 · 20 Jun 2018

DeepAffinity: Interpretable Deep Learning of Compound-Protein Affinity through Unified Recurrent and Convolutional Neural Networks
Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen
35 · 358 · 0 · 20 Jun 2018

Contrastive Explanations with Local Foil Trees
J. V. D. Waa, M. Robeer, J. Diggelen, Matthieu J. S. Brinkhuis, Mark Antonius Neerincx
FAtt · 21 · 82 · 0 · 19 Jun 2018

RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt · 76 · 1,153 · 0 · 19 Jun 2018

Instance-Level Explanations for Fraud Detection: A Case Study
Dennis Collaris, L. M. Vink, J. V. Wijk
31 · 31 · 0 · 19 Jun 2018

Biased Embeddings from Wild Data: Measuring, Understanding and Removing
Adam Sutton, Thomas Lansdall-Welfare, N. Cristianini
26 · 23 · 0 · 16 Jun 2018
Right for the Right Reason: Training Agnostic Networks
Sen Jia, Thomas Lansdall-Welfare, N. Cristianini
FaML · 17 · 26 · 0 · 16 Jun 2018

Hierarchical interpretations for neural network predictions
Chandan Singh, W. James Murdoch, Bin Yu
31 · 145 · 0 · 14 Jun 2018

Understanding Patch-Based Learning by Explaining Predictions
Christopher J. Anders, G. Montavon, Wojciech Samek, K. Müller
UQCV, FAtt · 33 · 6 · 0 · 11 Jun 2018

Assessing the impact of machine intelligence on human behaviour: an interdisciplinary endeavour
Emilia Gómez, Carlos Castillo, V. Charisi, V. Dahl, G. Deco, ..., Núria Sebastián, Xavier Serra, Joan Serrà, Songül Tolan, Karina Vold
11 · 11 · 0 · 07 Jun 2018

Performance Metric Elicitation from Pairwise Classifier Comparisons
G. Hiranandani, Shant Boodaghians, R. Mehta, Oluwasanmi Koyejo
19 · 14 · 0 · 05 Jun 2018

Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal
XAI · 40 · 1,842 · 0 · 31 May 2018
Human-in-the-Loop Interpretability Prior
Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez
32 · 120 · 0 · 29 May 2018

Lightly-supervised Representation Learning with Global Interpretability
M. A. Valenzuela-Escarcega, Ajay Nagesh, Mihai Surdeanu
SSL · 17 · 23 · 0 · 29 May 2018

Local Rule-Based Explanations of Black Box Decision Systems
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, D. Pedreschi, Franco Turini, F. Giannotti
31 · 435 · 0 · 28 May 2018

RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records
Bum Chul Kwon, Min-Je Choi, J. Kim, Edward Choi, Young Bin Kim, Soonwook Kwon, Jimeng Sun, Jaegul Choo
36 · 251 · 0 · 28 May 2018

"Why Should I Trust Interactive Learners?" Explaining Interactive Queries of Classifiers to Users
Stefano Teso, Kristian Kersting
FAtt, HAI · 25 · 12 · 0 · 22 May 2018

Defoiling Foiled Image Captions
Pranava Madhyastha, Josiah Wang, Lucia Specia
27 · 9 · 0 · 16 May 2018
Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models
Jacob R. Kauffmann, K. Müller, G. Montavon
DRL · 42 · 96 · 0 · 16 May 2018

SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines
Roy Schwartz, Sam Thomson, Noah A. Smith
30 · 24 · 0 · 15 May 2018

Did the Model Understand the Question?
Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, Kedar Dhamdhere
ELM, OOD, FAtt · 27 · 196 · 0 · 14 May 2018

Behavior Analysis of NLI Models: Uncovering the Influence of Three Factors on Robustness
V. Carmona, Jeff Mitchell, Sebastian Riedel
27 · 44 · 0 · 11 May 2018

A Symbolic Approach to Explaining Bayesian Network Classifiers
Andy Shih, Arthur Choi, Adnan Darwiche
FAtt · 13 · 237 · 0 · 09 May 2018

Explainable Recommendation: A Survey and New Perspectives
Yongfeng Zhang, Xu Chen
XAI, LRM · 52 · 866 · 0 · 30 Apr 2018
Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models
Hendrik Strobelt, Sebastian Gehrmann, M. Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush
VLM, HAI · 31 · 239 · 0 · 25 Apr 2018

Understanding Community Structure in Layered Neural Networks
C. Watanabe, Kaoru Hiramatsu, K. Kashino
13 · 22 · 0 · 13 Apr 2018

Visual Analytics for Explainable Deep Learning
Jaegul Choo, Shixia Liu
HAI, XAI · 14 · 235 · 0 · 07 Apr 2018

Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"?
L. Edwards, Michael Veale
FaML, AILaw · 20 · 134 · 0 · 20 Mar 2018

Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras, Marcel van Gerven, W. Haselager
XAI · 17 · 217 · 0 · 20 Mar 2018

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
Nicolas Papernot, Patrick McDaniel
OOD, AAML · 13 · 503 · 0 · 13 Mar 2018
Explaining Black-box Android Malware Detection
Marco Melis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli
AAML, FAtt · 9 · 43 · 0 · 09 Mar 2018

The Challenge of Crafting Intelligible Intelligence
Daniel S. Weld, Gagan Bansal
26 · 241 · 0 · 09 Mar 2018

Visual Explanations From Deep 3D Convolutional Neural Networks for Alzheimer's Disease Classification
Chengliang Yang, Anand Rangarajan, Sanjay Ranka
FAtt · 21 · 140 · 0 · 07 Mar 2018

Explain Yourself: A Natural Language Interface for Scrutable Autonomous Robots
Javier Chiyah-Garcia, D. A. Robb, Xingkun Liu, A. Laskov, P. Patrón, H. Hastie
24 · 25 · 0 · 06 Mar 2018

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
MLT, FAtt · 44 · 561 · 0 · 21 Feb 2018
Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections
Xin Zhang, Armando Solar-Lezama, Rishabh Singh
FAtt · 27 · 63 · 0 · 21 Feb 2018

Teaching Categories to Human Learners with Visual Explanations
Oisin Mac Aodha, Shihan Su, Yuxin Chen, Pietro Perona, Yisong Yue
21 · 70 · 0 · 20 Feb 2018

Explainable Prediction of Medical Codes from Clinical Text
J. Mullenbach, Sarah Wiegreffe, J. Duke, Jimeng Sun, Jacob Eisenstein
FAtt · 29 · 567 · 0 · 15 Feb 2018