"Why Should I Trust You?": Explaining the Predictions of Any Classifier

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
    FAtt
    FaML
ArXivPDFHTML
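
The paper introduces LIME, which explains an individual prediction of any black-box classifier by fitting a sparse linear model to the classifier's outputs on perturbed samples around that instance. A minimal sketch using the open-source `lime` package that accompanies the paper; the scikit-learn model and iris data are illustrative assumptions, not taken from this page:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any opaque classifier; LIME only needs its predict_proba function.
data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# Build an explainer from the training distribution.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain one prediction with a locally weighted sparse linear model.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # [(feature condition, local weight), ...]
```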

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

50 / 4,267 papers shown
Title
Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau · FAtt, 3DH, HAI · 04 Apr 2019

VINE: Visualizing Statistical Interactions in Black Box Models
M. Britton · FAtt · 01 Apr 2019

Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation
Marco Ancona, Cengiz Öztireli, Markus Gross · FAtt, TDI · 26 Mar 2019

Interpreting Neural Networks Using Flip Points
Roozbeh Yousefzadeh, D. O’Leary · AAML, FAtt · 21 Mar 2019

Natural Language Interaction with Explainable AI Models
Arjun Reddy Akula, S. Todorovic, J. Chai, Song-Chun Zhu · 13 Mar 2019

Explaining Anomalies Detected by Autoencoders Using SHAP
Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, Lior Rokach · FAtt, TDI · 06 Mar 2019

Copying Machine Learning Classifiers
Irene Unceta, Jordi Nin, O. Pujol · 05 Mar 2019

SAFE ML: Surrogate Assisted Feature Extraction for Model Learning
Alicja Gosiewska, A. Gacek, Piotr Lubon, P. Biecek · 28 Feb 2019

Deep learning in bioinformatics: introduction, application, and perspective in big data era
Yu Li, Chao Huang, Lizhong Ding, Zhongxiao Li, Yijie Pan, Xin Gao · AI4CE · 28 Feb 2019

Reliable Deep Grade Prediction with Uncertainty Estimation
Qian Hu, Huzefa Rangwala · 26 Feb 2019

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller · 26 Feb 2019

Saliency Learning: Teaching the Model Where to Pay Attention
Reza Ghaeini, Xiaoli Z. Fern, Hamed Shahbazi, Prasad Tadepalli · FAtt, XAI · 22 Feb 2019

Regularizing Black-box Models for Improved Interpretability
Gregory Plumb, Maruan Al-Shedivat, Ángel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar · AAML · 18 Feb 2019

Ask Not What AI Can Do, But What AI Should Do: Towards a Framework of Task Delegability
Brian Lubars, Chenhao Tan · 08 Feb 2019

Human-Centered Tools for Coping with Imperfect Algorithms during Medical Decision-Making
Carrie J. Cai, Emily Reif, Narayan Hegde, J. Hipp, Been Kim, ..., Martin Wattenberg, F. Viégas, G. Corrado, Martin C. Stumpe, Michael Terry · 08 Feb 2019

Fairwashing: the risk of rationalization
Ulrich Aïvodji, Hiromi Arai, O. Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp · FaML · 28 Jan 2019

Testing Conditional Independence in Supervised Learning Algorithms
David S. Watson, Marvin N. Wright · CML · 28 Jan 2019

On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar · FAtt · 27 Jan 2019

The autofeat Python Library for Automated Feature Engineering and Selection
F. Horn, R. Pack, M. Rieger · 22 Jan 2019

Explainable Failure Predictions with RNN Classifiers based on Time Series Data
I. Giurgiu, Anika Schumann · AI4TS · 20 Jan 2019

On Network Science and Mutual Information for Explaining Deep Neural Networks
Brian Davis, Umang Bhatt, Kartikeya Bhardwaj, R. Marculescu, J. M. F. Moura · FedML, SSL, FAtt · 20 Jan 2019

Quantifying Interpretability and Trust in Machine Learning Systems
Philipp Schmidt, F. Biessmann · 20 Jan 2019

Towards Aggregating Weighted Feature Attributions
Umang Bhatt, Pradeep Ravikumar, José M. F. Moura · FAtt, TDI · 20 Jan 2019

Visual Entailment: A Novel Task for Fine-Grained Image Understanding
Ning Xie, Farley Lai, Derek Doran, Asim Kadav · CoGe · 20 Jan 2019

Interpretable machine learning: definitions, methods, and applications
W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin-Xia Yu · XAI, HAI · 14 Jan 2019

Enhancing Explainability of Neural Networks through Architecture Constraints
Zebin Yang, Aijun Zhang, Agus Sudjianto · AAML · 12 Jan 2019

Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions
Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark O. Riedl · 11 Jan 2019

Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries
Christian Scano, Battista Biggio, Giovanni Lagorio, Fabio Roli, A. Armando · AAML · 11 Jan 2019

Interpretable CNNs for Object Classification
Quanshi Zhang, Xin Eric Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu · 08 Jan 2019

Ten ways to fool the masses with machine learning
F. Minhas, Amina Asif, Asa Ben-Hur · FedML, HAI · 07 Jan 2019

Can You Trust This Prediction? Auditing Pointwise Reliability After Learning
Peter F. Schulam, Suchi Saria · OOD · 02 Jan 2019

Efficient Search for Diverse Coherent Explanations
Chris Russell · 02 Jan 2019

Natively Interpretable Machine Learning and Artificial Intelligence: Preliminary Results and Future Directions
Christopher J. Hazard, Christopher Fusting, Michael Resnick, Michael Auerbach, M. Meehan, Valeri Korobov · 02 Jan 2019

Explaining Aggregates for Exploratory Analytics
Fotis Savva, Christos Anagnostopoulos, Peter Triantafillou · 29 Dec 2018

A Multi-Objective Anytime Rule Mining System to Ease Iterative Feedback from Domain Experts
T. Baum, Steffen Herbold, K. Schneider · 23 Dec 2018

Variance reduction for estimation of Shapley effects and adaptation to unknown input distribution
Baptiste Broto, François Bachoc, M. Depecker · FAtt · 21 Dec 2018

LEAFAGE: Example-based and Feature importance-based Explanations for Black-box ML models
Ajaya Adhikari, David Tax, R. Satta, M. Faeth · FAtt · 21 Dec 2018

Mining Interpretable AOG Representations from Convolutional Networks via Active Question Answering
Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu · 18 Dec 2018

Explaining Neural Networks Semantically and Quantitatively
Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang · FAtt · 18 Dec 2018

Interactive Naming for Explaining Deep Neural Networks: A Formative Study
M. Hamidi-Haines, Zhongang Qi, Alan Fern, Fuxin Li, Prasad Tadepalli · FAtt, HAI · 18 Dec 2018

Not Using the Car to See the Sidewalk: Quantifying and Controlling the Effects of Context in Classification and Segmentation
Rakshith Shetty, Bernt Schiele, Mario Fritz · 17 Dec 2018

Can I trust you more? Model-Agnostic Hierarchical Explanations
Michael Tsang, Youbang Sun, Dongxu Ren, Yan Liu · FAtt · 12 Dec 2018

Skin Lesions Classification Using Convolutional Neural Networks in Clinical Images
Danilo Barros Mendes, Nilton Correia da Silva · MedIm · 06 Dec 2018

Understanding Individual Decisions of CNNs via Contrastive Backpropagation
Jindong Gu, Yinchong Yang, Volker Tresp · FAtt · 05 Dec 2018

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktaschel, Thomas Lukasiewicz, Phil Blunsom · LRM · 04 Dec 2018

Interpretable Deep Learning under Fire
Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang · AAML, AI4CE · 03 Dec 2018

A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
Sina Mohseni, Niloofar Zarei, Eric D. Ragan · 28 Nov 2018

A Visual Interaction Framework for Dimensionality Reduction Based Data Exploration
M. Cavallo, Çağatay Demiralp · 28 Nov 2018

Abduction-Based Explanations for Machine Learning Models
Alexey Ignatiev, Nina Narodytska, Sasha Rubin · FAtt · 26 Nov 2018

How to improve the interpretability of kernel learning
Jinwei Zhao, Qizhou Wang, Yufei Wang, Yu Liu, Zhenghao Shi, Xinhong Hei · FAtt · 21 Nov 2018