"Why Should I Trust You?": Explaining the Predictions of Any Classifier
v1v2v3 (latest)

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
    FAttFaML
ArXiv (abs)PDFHTML

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

50 / 4,970 papers shown

Assessing the Local Interpretability of Machine Learning Models
Dylan Slack, Sorelle A. Friedler, C. Scheidegger, Chitradeep Dutta Roy
FAtt · 09 Feb 2019

Ask Not What AI Can Do, But What AI Should Do: Towards a Framework of Task Delegability
Brian Lubars, Chenhao Tan
08 Feb 2019

Human-Centered Tools for Coping with Imperfect Algorithms during Medical Decision-Making
Carrie J. Cai, Emily Reif, Narayan Hegde, J. Hipp, Been Kim, ..., Martin Wattenberg, F. Viégas, G. Corrado, Martin C. Stumpe, Michael Terry
08 Feb 2019

CHIP: Channel-wise Disentangled Interpretation of Deep Convolutional Neural Networks
Xinrui Cui, Dan Wang, Z. Jane Wang
FAtt, BDL · 07 Feb 2019

Global Explanations of Neural Networks: Mapping the Landscape of Predictions
Mark Ibrahim, Melissa Louie, C. Modarres, John Paisley
FAtt · 06 Feb 2019

Neural Network Attributions: A Causal Perspective
Aditya Chattopadhyay, Piyushi Manupriya, Anirban Sarkar, V. Balasubramanian
CML · 06 Feb 2019

Fooling Neural Network Interpretations via Adversarial Model Manipulation
Juyeon Heo, Sunghwan Joo, Taesup Moon
AAML, FAtt · 06 Feb 2019

Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI
Shane T. Mueller, R. Hoffman, W. Clancey, Abigail Emrey, Gary Klein
XAI · 05 Feb 2019

Saliency Tubes: Visual Explanations for Spatio-Temporal Convolutions
Alexandros Stergiou, G. Kapidis, Grigorios Kalliatakis, C. Chrysoulas, R. Veltkamp, R. Poppe
FAtt · 04 Feb 2019

Fairwashing: the risk of rationalization
Ulrich Aïvodji, Hiromi Arai, O. Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp
FaML · 28 Jan 2019

Testing Conditional Independence in Supervised Learning Algorithms
David S. Watson, Marvin N. Wright
CML · 28 Jan 2019

On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
FAtt · 27 Jan 2019

The autofeat Python Library for Automated Feature Engineering and Selection
F. Horn, R. Pack, M. Rieger
22 Jan 2019

Explainable Failure Predictions with RNN Classifiers based on Time Series Data
I. Giurgiu, Anika Schumann
AI4TS · 20 Jan 2019

On Network Science and Mutual Information for Explaining Deep Neural Networks
Brian Davis, Umang Bhatt, Kartikeya Bhardwaj, R. Marculescu, J. M. F. Moura
FedML, SSL, FAtt · 20 Jan 2019

Quantifying Interpretability and Trust in Machine Learning Systems
Philipp Schmidt, F. Biessmann
20 Jan 2019

Towards Aggregating Weighted Feature Attributions
Umang Bhatt, Pradeep Ravikumar, José M. F. Moura
FAtt, TDI · 20 Jan 2019

Visual Entailment: A Novel Task for Fine-Grained Image Understanding
Ning Xie, Farley Lai, Derek Doran, Asim Kadav
CoGe · 20 Jan 2019

Interpretable machine learning: definitions, methods, and applications
W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin Yu
XAI, HAI · 14 Jan 2019

Enhancing Explainability of Neural Networks through Architecture Constraints
Zebin Yang, Aijun Zhang, Agus Sudjianto
AAML · 12 Jan 2019

Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions
Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark O. Riedl
11 Jan 2019

Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries
Christian Scano, Battista Biggio, Giovanni Lagorio, Fabio Roli, A. Armando
AAML · 11 Jan 2019

Interpretable CNNs for Object Classification
Quanshi Zhang, Xin Eric Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu
08 Jan 2019

Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks
Zenan Ling, Haotian Ma, Yu Yang, Robert C. Qiu, Song-Chun Zhu, Quanshi Zhang
MILM · 08 Jan 2019

Ten ways to fool the masses with machine learning
F. Minhas, Amina Asif, Asa Ben-Hur
FedML, HAI · 07 Jan 2019

Personalized explanation in machine learning: A conceptualization
J. Schneider, J. Handali
XAI, FAtt · 03 Jan 2019

Can You Trust This Prediction? Auditing Pointwise Reliability After Learning
Peter F. Schulam, Suchi Saria
OOD · 02 Jan 2019

Efficient Search for Diverse Coherent Explanations
Chris Russell
02 Jan 2019

Natively Interpretable Machine Learning and Artificial Intelligence: Preliminary Results and Future Directions
Christopher J. Hazard, Christopher Fusting, Michael Resnick, Michael Auerbach, M. Meehan, Valeri Korobov
02 Jan 2019

Explaining Aggregates for Exploratory Analytics
Fotis Savva, Christos Anagnostopoulos, Peter Triantafillou
29 Dec 2018

Improving the Interpretability of Deep Neural Networks with Knowledge Distillation
Xuan Liu, Xiaoguang Wang, Stan Matwin
HAI · 28 Dec 2018

Attention Branch Network: Learning of Attention Mechanism for Visual Explanation
Hiroshi Fukui, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi
XAI, FAtt · 25 Dec 2018

A Multi-Objective Anytime Rule Mining System to Ease Iterative Feedback from Domain Experts
T. Baum, Steffen Herbold, K. Schneider
23 Dec 2018

Machine learning and AI research for Patient Benefit: 20 Critical Questions on Transparency, Replicability, Ethics and Effectiveness
Sebastian J. Vollmer, Bilal A. Mateen, G. Bohner, Franz J. Király, Rayid Ghani, ..., Karel G. M. Moons, Gary S. Collins, J. Ioannidis, Chris Holmes, H. Hemingway
21 Dec 2018

Variance reduction for estimation of Shapley effects and adaptation to unknown input distribution
Baptiste Broto, François Bachoc, M. Depecker
FAtt · 21 Dec 2018

LEAFAGE: Example-based and Feature importance-based Explanations for Black-box ML models
Ajaya Adhikari, David Tax, R. Satta, M. Faeth
FAtt · 21 Dec 2018

Deep Transfer Learning for Static Malware Classification
Li-Wei Chen
18 Dec 2018

Explanatory Graphs for CNNs
Quanshi Zhang, Xin Eric Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu
FAtt, GNN · 18 Dec 2018

Mining Interpretable AOG Representations from Convolutional Networks via Active Question Answering
Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
18 Dec 2018

Explaining Neural Networks Semantically and Quantitatively
Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang
FAtt · 18 Dec 2018

Interactive Naming for Explaining Deep Neural Networks: A Formative Study
M. Hamidi-Haines, Zhongang Qi, Alan Fern, Fuxin Li, Prasad Tadepalli
FAtt, HAI · 18 Dec 2018

A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability
Xiaowei Huang, Daniel Kroening, Wenjie Ruan, Marta Kwiatkowska, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
AAML · 18 Dec 2018

Not Using the Car to See the Sidewalk: Quantifying and Controlling the Effects of Context in Classification and Segmentation
Rakshith Shetty, Bernt Schiele, Mario Fritz
17 Dec 2018

Can I trust you more? Model-Agnostic Hierarchical Explanations
Michael Tsang, Youbang Sun, Dongxu Ren, Yan Liu
FAtt · 12 Dec 2018

Skin Lesions Classification Using Convolutional Neural Networks in Clinical Images
Danilo Barros Mendes, Nilton Correia da Silva
MedIm · 06 Dec 2018

Are you tough enough? Framework for Robustness Validation of Machine Comprehension Systems
Barbara Rychalska, Dominika Basaj, P. Biecek
OOD, AAML · 05 Dec 2018

Understanding Individual Decisions of CNNs via Contrastive Backpropagation
Jindong Gu, Yinchong Yang, Volker Tresp
FAtt · 05 Dec 2018

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
LRM · 04 Dec 2018

Sensitivity based Neural Networks Explanations
Enguerrand Horel, Virgile Mison, T. Xiong, K. Giesecke, L. Mangu
AAML, XAI, FAtt · 03 Dec 2018

Interpretable Deep Learning under Fire
Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang
AAML, AI4CE · 03 Dec 2018