ResearchTrend.AI
arXiv:1602.04938 · Cited By

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Topics: FAtt, FaML
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier
50 / 4,309 papers shown
  • Learning Propagation Rules for Attribution Map Generation
    Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang
    FAtt · 38 / 17 / 0 · 14 Oct 2020
  • Neural Databases
    James Thorne, Majid Yazdani, Marzieh Saeidi, Fabrizio Silvestri, Sebastian Riedel, A. Halevy
    NAI · 34 / 9 / 0 · 14 Oct 2020
  • PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks
    Minh Nhat Vu, My T. Thai
    BDL · 18 / 328 / 0 · 12 Oct 2020
  • The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
    Jasmijn Bastings, Katja Filippova
    XAI, LRM · 64 / 175 / 0 · 12 Oct 2020
  • Diptychs of human and machine perceptions
    Vivien A. Cabannes, Thomas Kerdreux, L. Thiry
    28 / 0 / 0 · 12 Oct 2020
  • A Series of Unfortunate Counterfactual Events: the Role of Time in Counterfactual Explanations
    Andrea Ferrario, M. Loi
    25 / 5 / 0 · 09 Oct 2020
  • Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision
    Max Glockner, Ivan Habernal, Iryna Gurevych
    LRM · 32 / 25 / 0 · 07 Oct 2020
  • Global Optimization of Objective Functions Represented by ReLU Networks
    Christopher A. Strong, Haoze Wu, Aleksandar Zeljić, Kyle D. Julian, Guy Katz, Clark W. Barrett, Mykel J. Kochenderfer
    AAML · 17 / 33 / 0 · 07 Oct 2020
  • PRover: Proof Generation for Interpretable Reasoning over Rules
    Swarnadeep Saha, Sayan Ghosh, Shashank Srivastava, Joey Tianyi Zhou
    ReLM, LRM · 36 / 77 / 0 · 06 Oct 2020
  • Astraea: Grammar-based Fairness Testing
    E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay
    26 / 27 / 0 · 06 Oct 2020
  • Visualizing Color-wise Saliency of Black-Box Image Classification Models
    Yuhki Hatakeyama, Hiroki Sakuma, Yoshinori Konishi, Kohei Suenaga
    FAtt · 32 / 3 / 0 · 06 Oct 2020
  • Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting
    Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell
    CLL · 37 / 45 / 0 · 04 Oct 2020
  • Explaining Deep Neural Networks
    Oana-Maria Camburu
    XAI, FAtt · 38 / 26 / 0 · 04 Oct 2020
  • Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
    Hanjie Chen, Yangfeng Ji
    AAML, VLM · 20 / 63 / 0 · 01 Oct 2020
  • Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
    M. Schlichtkrull, Nicola De Cao, Ivan Titov
    AI4CE · 36 / 214 / 0 · 01 Oct 2020
  • Assessing Robustness of Text Classification through Maximal Safe Radius Computation
    Emanuele La Malfa, Min Wu, Luca Laurenti, Benjie Wang, Anthony Hartshorn, Marta Z. Kwiatkowska
    AAML · 20 / 18 / 0 · 01 Oct 2020
  • Interpretable Machine Learning for COVID-19: An Empirical Study on Severity Prediction Task
    Han-Ching Wu, Wenjie Ruan, Jiangtao Wang, Dingchang Zheng, Bei Liu, ..., Xiangfei Chai, Jian Chen, Kunwei Li, Shaolin Li, A. Helal
    32 / 25 / 0 · 30 Sep 2020
  • Accurate and Robust Feature Importance Estimation under Distribution Shifts
    Jayaraman J. Thiagarajan, V. Narayanaswamy, Rushil Anirudh, P. Bremer, A. Spanias
    OOD · 21 / 9 / 0 · 30 Sep 2020
  • Trustworthy Convolutional Neural Networks: A Gradient Penalized-based Approach
    Nicholas F Halliwell, Freddy Lecue
    FAtt · 25 / 9 / 0 · 29 Sep 2020
  • Where is the Model Looking At?--Concentrate and Explain the Network Attention
    Wenjia Xu, Jiuniu Wang, Yang Wang, Guangluan Xu, Wei Dai, Yirong Wu
    XAI · 32 / 17 / 0 · 29 Sep 2020
  • A Comprehensive Survey of Machine Learning Applied to Radar Signal Processing
    Ping Lang, Xiongjun Fu, M. Martorella, Jian Dong, Rui Qin, Xianpeng Meng, M. Xie
    26 / 39 / 0 · 29 Sep 2020
  • Instance-based Counterfactual Explanations for Time Series Classification
    Eoin Delaney, Derek Greene, Mark T. Keane
    CML, AI4TS · 21 / 89 / 0 · 28 Sep 2020
  • Distillation of Weighted Automata from Recurrent Neural Networks using a Spectral Approach
    Rémi Eyraud, Stéphane Ayache
    26 / 16 / 0 · 28 Sep 2020
  • VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection
    Liang Gou, Lincan Zou, Nanxiang Li, M. Hofmann, A. Shekar, A. Wendt, Liu Ren
    36 / 60 / 0 · 27 Sep 2020
  • An Explainable Model for EEG Seizure Detection based on Connectivity Features
    Mohamad Mansour, Fouad Khnaisser, Hmayag Partamian
    14 / 6 / 0 · 26 Sep 2020
  • Landscape of R packages for eXplainable Artificial Intelligence
    Szymon Maksymiuk, Alicja Gosiewska, P. Biecek
    XAI · 43 / 21 / 0 · 24 Sep 2020
  • What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
    Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik
    XAI · 34 / 93 / 0 · 22 Sep 2020
  • Impact of lung segmentation on the diagnosis and explanation of COVID-19 in chest X-ray images
    Lucas O. Teixeira, R. M. Pereira, Diego Bertolini, Luiz Eduardo Soares de Oliveira, L. Nanni, George D. C. Cavalcanti, Yandre M. G. Costa
    23 / 114 / 0 · 21 Sep 2020
  • Machine Guides, Human Supervises: Interactive Learning with Global Explanations
    Teodora Popordanoska, Mohit Kumar, Stefano Teso
    21 / 21 / 0 · 21 Sep 2020
  • Evaluation of Local Explanation Methods for Multivariate Time Series Forecasting
    Ozan Ozyegen, Igor Ilic, Mucahit Cevik
    FAtt, AI4TS · 24 / 2 / 0 · 18 Sep 2020
  • Contextual Semantic Interpretability
    Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia
    SSL · 25 / 27 / 0 · 18 Sep 2020
  • Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses
    Kaivalya Rawal, Himabindu Lakkaraju
    29 / 11 / 0 · 15 Sep 2020
  • SCOUTER: Slot Attention-based Classifier for Explainable Image Recognition
    Liangzhi Li, Bowen Wang, Manisha Verma, Yuta Nakashima, R. Kawasaki, Hajime Nagahara
    OCL · 23 / 49 / 0 · 14 Sep 2020
  • Interpretable Machine Learning Approaches to Prediction of Chronic Homelessness
    Blake VanBerlo, Matthew A. S. Ross, Jonathan Rivard, Ryan Booker
    6 / 26 / 0 · 12 Sep 2020
  • A Game-Based Approach for Helping Designers Learn Machine Learning Concepts
    Chelsea M. Myers, Jiachi Xie, Jichen Zhu
    24 / 4 / 0 · 11 Sep 2020
  • The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
    Timo Freiesleben
    GAN · 46 / 62 / 0 · 11 Sep 2020
  • Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier
    Courtney Ford, Eoin M. Kenny, Mark T. Keane
    23 / 6 / 0 · 10 Sep 2020
  • On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning
    Eoin M. Kenny, Mark T. Keane
    28 / 99 / 0 · 10 Sep 2020
  • Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
    Erico Tjoa, Cuntai Guan
    XAI, FAtt · 21 / 27 / 0 · 07 Sep 2020
  • Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring
    Nijat Mehdiyev, Peter Fettke
    AI4TS · 25 / 55 / 0 · 04 Sep 2020
  • Model extraction from counterfactual explanations
    Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs
    MIACV, MLAU · 33 / 51 / 0 · 03 Sep 2020
  • Explainable Empirical Risk Minimization
    Linli Zhang, Georgios Karakasidis, Arina Odnoblyudova, Leyla Dogruel, Alex Jung
    27 / 5 / 0 · 03 Sep 2020
  • Interactive Visual Study of Multiple Attributes Learning Model of X-Ray Scattering Images
    Xinyi Huang, Suphanut Jamonnak, Ye Zhao, Boyu Wang, Minh Hoai, Kevin Yager, Wei Xu
    30 / 5 / 0 · 03 Sep 2020
  • Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
    Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan
    HAI · 38 / 61 / 0 · 28 Aug 2020
  • SHAP values for Explaining CNN-based Text Classification Models
    Wei Zhao, Tarun Joshi, V. Nair, Agus Sudjianto
    FAtt · 28 / 36 / 0 · 26 Aug 2020
  • The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems
    Mahsan Nourani, J. King, Eric D. Ragan
    25 / 99 / 0 · 20 Aug 2020
  • XNAP: Making LSTM-based Next Activity Predictions Explainable by Using LRP
    Sven Weinzierl, Sandra Zilker, Jens Brunk, K. Revoredo, Martin Matzner, J. Becker
    28 / 26 / 0 · 18 Aug 2020
  • Survey of XAI in digital pathology
    Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström
    14 / 56 / 0 · 14 Aug 2020
  • Can We Trust Your Explanations? Sanity Checks for Interpreters in Android Malware Analysis
    Ming Fan, Wenying Wei, Xiaofei Xie, Yang Liu, X. Guan, Ting Liu
    FAtt, AAML · 27 / 36 / 0 · 13 Aug 2020
  • Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
    Sasha Rubin, Thomas Gerspacher, Martin C. Cooper, Alexey Ignatiev, Nina Narodytska
    FAtt · 30 / 59 / 0 · 13 Aug 2020