"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt, FaML

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

50 / 4,966 papers shown
Visual Psychophysics for Making Face Recognition Algorithms More Explainable
Brandon RichardWebster
So Yon Kwon
Christopher Clarizio
Samuel E. Anthony
Walter J. Scheirer
CVBM
60
41
0
19 Mar 2018
Towards Explanation of DNN-based Prediction with Guided Feature Inversion
Mengnan Du
Ninghao Liu
Qingquan Song
Helen Zhou
FAtt
97
127
0
19 Mar 2018
Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
Nicolas Papernot
Patrick McDaniel
OOD, AAML
156
508
0
13 Mar 2018
Interpreting Deep Classifier by Visual Distillation of Dark Knowledge
Kai Xu
Dae Hoon Park
Chang Yi
Charles Sutton
HAI, FAtt
67
26
0
11 Mar 2018
Explaining Black-box Android Malware Detection
Marco Melis
Davide Maiorca
Battista Biggio
Giorgio Giacinto
Fabio Roli
AAML, FAtt
49
44
0
09 Mar 2018
The Challenge of Crafting Intelligible Intelligence
Daniel S. Weld
Gagan Bansal
58
244
0
09 Mar 2018
Visual Explanations From Deep 3D Convolutional Neural Networks for Alzheimer's Disease Classification
Chengliang Yang
Anand Rangarajan
Sanjay Ranka
FAtt
63
142
0
07 Mar 2018
Explain Yourself: A Natural Language Interface for Scrutable Autonomous Robots
Javier Chiyah-Garcia
D. A. Robb
Xingkun Liu
A. Laskov
P. Patrón
H. Hastie
71
25
0
06 Mar 2018
On Cognitive Preferences and the Plausibility of Rule-based Models
Johannes Furnkranz
Tomáš Kliegr
Heiko Paulheim
LRM
73
71
0
04 Mar 2018
Improved Explainability of Capsule Networks: Relevance Path by Agreement
Atefeh Shahroudnejad
Arash Mohammadi
Konstantinos N. Plataniotis
AAML, MedIm
50
62
0
27 Feb 2018
Interpreting Complex Regression Models
Noa Avigdor-Elgrabli
Alex Libov
M. Viderman
R. Wolff
37
1
0
26 Feb 2018
Interpretable Charge Predictions for Criminal Cases: Learning to Generate Court Views from Fact Descriptions
Hai Ye
Xin Jiang
Zhunchen Luo
Wen-Han Chao
80
132
0
23 Feb 2018
Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
Jianbo Chen
Le Song
Martin J. Wainwright
Michael I. Jordan
MLT, FAtt
184
576
0
21 Feb 2018
Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
Amit Dhurandhar
Pin-Yu Chen
Ronny Luss
Chun-Chen Tu
Pai-Shun Ting
Karthikeyan Shanmugam
Payel Das
FAtt
140
592
0
21 Feb 2018
Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections
Xin Zhang
Armando Solar-Lezama
Rishabh Singh
FAtt
115
63
0
21 Feb 2018
Teaching Categories to Human Learners with Visual Explanations
Oisin Mac Aodha
Shihan Su
Yuxin Chen
Pietro Perona
Yisong Yue
135
71
0
20 Feb 2018
Finding Influential Training Samples for Gradient Boosted Decision Trees
B. Sharchilev
Yury Ustinovsky
P. Serdyukov
Maarten de Rijke
TDI
74
57
0
19 Feb 2018
Exact and Consistent Interpretation for Piecewise Linear Neural Networks: A Closed Form Solution
Lingyang Chu
X. Hu
Juhua Hu
Lanjun Wang
J. Pei
62
100
0
17 Feb 2018
Explainable Prediction of Medical Codes from Clinical Text
J. Mullenbach
Sarah Wiegreffe
J. Duke
Jimeng Sun
Jacob Eisenstein
FAtt
95
576
0
15 Feb 2018
Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute
Tong Wang
50
8
0
12 Feb 2018
Evaluating Compositionality in Sentence Embeddings
Ishita Dasgupta
Demi Guo
Andreas Stuhlmuller
S. Gershman
Noah D. Goodman
CoGe
98
121
0
12 Feb 2018
Consistent Individualized Feature Attribution for Tree Ensembles
Scott M. Lundberg
G. Erion
Su-In Lee
FAtt, TDI
86
1,408
0
12 Feb 2018
Influence-Directed Explanations for Deep Convolutional Networks
Klas Leino
S. Sen
Anupam Datta
Matt Fredrikson
Linyi Li
TDI, FAtt
113
75
0
11 Feb 2018
Global Model Interpretation via Recursive Partitioning
Chengliang Yang
Anand Rangarajan
Sanjay Ranka
FAtt
62
80
0
11 Feb 2018
WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-Hop Inference
Peter Alexander Jansen
Elizabeth Wainwright
Steven Marmorstein
Clayton T. Morrison
89
123
0
08 Feb 2018
Granger-causal Attentive Mixtures of Experts: Learning Important Features with Neural Networks
Patrick Schwab
Djordje Miladinovic
W. Karlen
CML
99
57
0
06 Feb 2018
A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti
A. Monreale
Salvatore Ruggieri
Franco Turini
D. Pedreschi
F. Giannotti
XAI
179
3,996
0
06 Feb 2018
Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making
Michael Veale
Max Van Kleek
Reuben Binns
71
424
0
03 Feb 2018
How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
Menaka Narayanan
Emily Chen
Jeffrey He
Been Kim
S. Gershman
Finale Doshi-Velez
FAtt, XAI
110
243
0
02 Feb 2018
Visual Interpretability for Deep Learning: a Survey
Quanshi Zhang
Song-Chun Zhu
FaML, HAI
179
822
0
02 Feb 2018
Interpretable Deep Convolutional Neural Networks via Meta-learning
Xuan Liu
Xiaoguang Wang
Stan Matwin
FaML
154
38
0
02 Feb 2018
Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations
M. Harradon
Jeff Druce
Brian E. Ruttenberg
BDL, CML
53
82
0
02 Feb 2018
Interpreting CNNs via Decision Trees
Quanshi Zhang
Yu Yang
Ying Nian Wu
Song-Chun Zhu
FAtt
88
323
0
01 Feb 2018
'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions
Reuben Binns
Max Van Kleek
Michael Veale
Ulrik Lyngs
Jun Zhao
N. Shadbolt
FaML
77
546
0
31 Jan 2018
The Intriguing Properties of Model Explanations
Maruan Al-Shedivat
Kumar Avinava Dubey
Eric Xing
FAtt
41
7
0
30 Jan 2018
Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers
Fred Hohman
Minsuk Kahng
Robert S. Pienta
Duen Horng Chau
OOD, HAI
101
541
0
21 Jan 2018
Evaluating neural network explanation methods using hybrid documents and morphological agreement
Nina Pörner
Benjamin Roth
Hinrich Schütze
60
9
0
19 Jan 2018
A Comparative Study of Rule Extraction for Recurrent Neural Networks
Qinglong Wang
Kaixuan Zhang
Alexander Ororbia
Masashi Sugiyama
Xue Liu
C. Lee Giles
80
11
0
16 Jan 2018
A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning
Sina Mohseni
Jeremy E. Block
Eric D. Ragan
FAtt, XAI
88
61
0
16 Jan 2018
Deep Learning: A Critical Appraisal
G. Marcus
HAI, VLM
149
1,043
0
02 Jan 2018
What do we need to build explainable AI systems for the medical domain?
Andreas Holzinger
Chris Biemann
C. Pattichis
D. Kell
79
694
0
28 Dec 2017
The Robust Manifold Defense: Adversarial Training using Generative Models
A. Jalal
Andrew Ilyas
C. Daskalakis
A. Dimakis
AAML
109
174
0
26 Dec 2017
Dropout Feature Ranking for Deep Learning Models
C. Chang
Ladislav Rampášek
Anna Goldenberg
OOD
71
50
0
22 Dec 2017
Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology
Andreas Holzinger
Bernd Malle
Peter Kieseberg
P. Roth
Heimo Muller
Robert Reihs
K. Zatloukal
43
91
0
18 Dec 2017
A trans-disciplinary review of deep learning research for water resources scientists
Chaopeng Shen
AI4CE
226
699
0
06 Dec 2017
SMILES2Vec: An Interpretable General-Purpose Deep Neural Network for Predicting Chemical Properties
Garrett B. Goh
Nathan Oken Hodas
Charles Siegel
Abhinav Vishnu
69
144
0
06 Dec 2017
Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences
Tim Miller
Piers Howe
L. Sonenberg
AI4TSSyDa
86
374
0
02 Dec 2017
ConvNets and ImageNet Beyond Accuracy: Understanding Mistakes and Uncovering Biases
Pierre Stock
Moustapha Cissé
FaML
92
46
0
30 Nov 2017
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim
Martin Wattenberg
Justin Gilmer
Carrie J. Cai
James Wexler
F. Viégas
Rory Sayres
FAtt
270
1,848
0
30 Nov 2017
Contextual Outlier Interpretation
Ninghao Liu
DongHwa Shin
Helen Zhou
60
72
0
28 Nov 2017