"Why Should I Trust You?": Explaining the Predictions of Any Classifier
v1v2v3 (latest)

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
    FAttFaML
ArXiv (abs)PDFHTML

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

50 / 4,971 papers shown
Title
DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems
  Muhammad Rehman Zafar, N. Khan · FAtt · 24 Jun 2019
Generating Counterfactual and Contrastive Explanations using SHAP
  Shubham Rathi · 21 Jun 2019
Machine Learning Testing: Survey, Landscapes and Horizons
  Jie M. Zhang, Mark Harman, Lei Ma, Yang Liu · VLM, AILaw · 19 Jun 2019
Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks
  R. Confalonieri, Tillman Weyde, Tarek R. Besold, Fermín Moscoso del Prado Martín · 19 Jun 2019
Incorporating Priors with Feature Attribution on Text Classification
  Frederick Liu, Besim Avci · FAtt, FaML · 19 Jun 2019
Explanations can be manipulated and geometry is to blame
  Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel · AAML, FAtt · 19 Jun 2019
VizADS-B: Analyzing Sequences of ADS-B Images Using Explainable Convolutional LSTM Encoder-Decoder to Detect Cyber Attacks
  Sefi Akerman, Edan Habler, A. Shabtai · 19 Jun 2019
From Clustering to Cluster Explanations via Neural Networks
  Jacob R. Kauffmann, Malte Esders, Lukas Ruff, G. Montavon, Wojciech Samek, K. Müller · 18 Jun 2019
Exact and Consistent Interpretation of Piecewise Linear Models Hidden behind APIs: A Closed Form Solution
  Zicun Cong, Lingyang Chu, Lanjun Wang, X. Hu, J. Pei · 17 Jun 2019
ASAC: Active Sensing using Actor-Critic models
  Jinsung Yoon, James Jordon, M. Schaar · CML · 16 Jun 2019
MoËT: Mixture of Expert Trees and its Application to Verifiable Reinforcement Learning
  Marko Vasic, Andrija Petrović, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, S. Khurshid · OffRL, MoE · 16 Jun 2019
Yoga-Veganism: Correlation Mining of Twitter Health Data
  Tunazzina Islam · 15 Jun 2019
Understanding artificial intelligence ethics and safety
  David Leslie · FaML, AI4TS · 11 Jun 2019
Toward Best Practices for Explainable B2B Machine Learning
  Kit Kuksenok · 11 Jun 2019
Extracting Interpretable Concept-Based Decision Trees from CNNs
  Conner Chyung, Michael Tsang, Yan Liu · FAtt · 11 Jun 2019
Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding
  Haotian Ma, Hao Zhang, Fan Zhou, Yinqing Zhang, Quanshi Zhang · FAtt · 10 Jun 2019
Is Attention Interpretable?
  Sofia Serrano, Noah A. Smith · 09 Jun 2019
Proposed Guidelines for the Responsible Use of Explainable Machine Learning
  Patrick Hall, Navdeep Gill, N. Schmidt · SILM, XAI, FaML · 08 Jun 2019
ML-LOO: Detecting Adversarial Examples with Feature Attribution
  Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan · AAML · 08 Jun 2019
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness
  Walt Woods, Jack H Chen, C. Teuscher · AAML · 07 Jun 2019
XRAI: Better Attributions Through Regions
  A. Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry · FAtt, XAI · 06 Jun 2019
Survey on Publicly Available Sinhala Natural Language Processing Tools and Research
  Nisansa de Silva · 05 Jun 2019
Evaluating Explanation Methods for Deep Learning in Security
  Alexander Warnecke, Dan Arp, Christian Wressnegger, Konrad Rieck · XAI, AAML, FAtt · 05 Jun 2019
c-Eval: A Unified Metric to Evaluate Feature-based Explanations via Perturbation
  Minh Nhat Vu, Truc D. T. Nguyen, Nhathai Phan, Ralucca Gera, My T. Thai · AAML, FAtt · 05 Jun 2019
Interpretable and Differentially Private Predictions
  Frederik Harder, Matthias Bauer, Mijung Park · FAtt · 05 Jun 2019
A Just and Comprehensive Strategy for Using NLP to Address Online Abuse
  David Jurgens, Eshwar Chandrasekharan, Libby Hemphill · 04 Jun 2019
Learning Interpretable Shapelets for Time Series Classification through Adversarial Regularization
  Yichang Wang, Rémi Emonet, Elisa Fromont, S. Malinowski, Etienne Ménager, Loic Mosser, R. Tavenard · AI4TS · 03 Jun 2019
Model Agnostic Contrastive Explanations for Structured Data
  Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchi Puri · FAtt · 31 May 2019
Do Human Rationales Improve Machine Explanations?
  Julia Strout, Ye Zhang, Raymond J. Mooney · 31 May 2019
Explainability Techniques for Graph Convolutional Networks
  Federico Baldassarre, Hossein Azizpour · GNN, FAtt · 31 May 2019
Leveraging Latent Features for Local Explanations
  Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, P. Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu · FAtt · 29 May 2019
Learning Representations by Humans, for Humans
  Sophie Hilgard, Nir Rosenfeld, M. Banaji, Jack Cao, David C. Parkes · OCL, HAI, AI4CE · 29 May 2019
Generation of Policy-Level Explanations for Reinforcement Learning
  Nicholay Topin, Manuela Veloso · 28 May 2019
Adversarial Robustness Guarantees for Classification with Gaussian Processes
  Arno Blaas, A. Patané, Luca Laurenti, L. Cardelli, Marta Z. Kwiatkowska, Stephen J. Roberts · GP, AAML · 28 May 2019
EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction
  Diane Bouchacourt, Ludovic Denoyer · FAtt · 28 May 2019
Analyzing the Interpretability Robustness of Self-Explaining Models
  Haizhong Zheng, Earlence Fernandes, A. Prakash · AAML, LRM · 27 May 2019
Infusing domain knowledge in AI-based "black box" models for better explainability with application in bankruptcy prediction
  Sheikh Rabiul Islam, W. Eberle, Sid Bundy, S. Ghafoor · MLAU · 27 May 2019
Interpretable Neural Predictions with Differentiable Binary Variables
  Jasmijn Bastings, Wilker Aziz, Ivan Titov · 20 May 2019
The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning
  Mark T. Keane, Eoin M. Kenny · 20 May 2019
Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations
  R. Mothilal, Amit Sharma, Chenhao Tan · CML · 19 May 2019
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees
  Summer Devlin, Chandan Singh, W. James Murdoch, Bin Yu · FAtt · 18 May 2019
How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins
  Mark T. Keane, Eoin M. Kenny · 17 May 2019
From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
  Jessica Morley, Luciano Floridi, Libby Kinsey, Anat Elhalal · 15 May 2019
Modelling urban networks using Variational Autoencoders
  Kira Kempinska, R. Murcio · GNN · 14 May 2019
What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
  S. Tonekaboni, Shalmali Joshi, M. Mccradden, Anna Goldenberg · 13 May 2019
Explainable AI for Trees: From Local Explanations to Global Understanding
  Scott M. Lundberg, G. Erion, Hugh Chen, A. DeGrave, J. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, Su-In Lee · FAtt · 11 May 2019
Interpret Federated Learning with Shapley Values
  Guan Wang · FAtt, FedML · 11 May 2019
Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model
  Tong Wang, Qihang Lin · 10 May 2019
Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges
  Rob Ashmore, R. Calinescu, Colin Paterson · AI4TS · 10 May 2019
Embedding Human Knowledge into Deep Neural Network via Attention Map
  Masahiro Mitsuhara, Hiroshi Fukui, Yusuke Sakashita, Takanori Ogata, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi · 09 May 2019