"Why Should I Trust You?": Explaining the Predictions of Any Classifier

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
    FAtt
    FaML

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

Showing 50 of 4,309 citing papers
Explainability and Adversarial Robustness for RNNs
Alexander Hartl
Maximilian Bachl
J. Fabini
Tanja Zseby
AAML
22
32
0
20 Dec 2019
When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt
Maximilian Granz
Tim Landgraf
BDL
FAtt
XAI
13
132
0
20 Dec 2019
A Framework for Explainable Text Classification in Legal Document Review
Christian J. Mahoney
Jianping Zhang
Nathaniel Huber-Fliflet
Peter Gronvall
Haozhen Zhao
AILaw
19
32
0
19 Dec 2019
Temporal Fusion Transformers for Interpretable Multi-horizon Time Series Forecasting
Bryan Lim
Sercan Ö. Arik
Nicolas Loeff
Tomas Pfister
AI4TS
66
1,415
0
19 Dec 2019
Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations
Andreas Holzinger
André M. Carrington
Heimo Muller
LRM
XAI
ELM
22
300
0
19 Dec 2019
Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics
Debjani Saha
Candice Schumann
Duncan C. McElfresh
John P. Dickerson
Michelle L. Mazurek
Michael Carl Tschantz
FaML
32
16
0
17 Dec 2019
Fine-grained Classification of Rowing teams
M.J.A. van Wezel
L. J. Hamburger
Y. Napolean
32
1
0
11 Dec 2019
Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches
Kacper Sokol
Peter A. Flach
XAI
19
299
0
11 Dec 2019
Transparent Classification with Multilayer Logical Perceptrons and Random Binarization
Zhuo Wang
Wei Zhang
Ning Liu
Jianyong Wang
19
29
0
10 Dec 2019
Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning
Akanksha Atrey
Kaleigh Clary
David D. Jensen
FAtt
LRM
21
90
0
09 Dec 2019
Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers
Divyat Mahajan
Chenhao Tan
Amit Sharma
OOD
CML
28
206
0
06 Dec 2019
Neural Machine Translation: A Review and Survey
Felix Stahlberg
3DV
AI4TS
MedIm
30
313
0
04 Dec 2019
Counterfactual Explanation Algorithms for Behavioral and Textual Data
Yanou Ramon
David Martens
F. Provost
Theodoros Evgeniou
FAtt
31
87
0
04 Dec 2019
Explainable artificial intelligence model to predict acute critical illness from electronic health records
S. Lauritsen
Mads Kristensen
Mathias Vassard Olsen
Morten Skaarup Larsen
K. M. Lauritsen
Marianne Johansson Jørgensen
Jeppe Lange
B. Thiesson
21
298
0
03 Dec 2019
Automated Dependence Plots
David I. Inouye
Liu Leqi
Joon Sik Kim
Bryon Aragam
Pradeep Ravikumar
12
1
0
02 Dec 2019
ACE -- An Anomaly Contribution Explainer for Cyber-Security Applications
Xiao Zhang
Manish Marwah
I-Ta Lee
M. Arlitt
Dan Goldwasser
29
14
0
01 Dec 2019
Towards Quantification of Explainability in Explainable Artificial Intelligence Methods
Sheikh Rabiul Islam
W. Eberle
S. Ghafoor
XAI
22
42
0
22 Nov 2019
Domain Knowledge Aided Explainable Artificial Intelligence for Intrusion Detection and Response
Sheikh Rabiul Islam
W. Eberle
S. Ghafoor
Ambareen Siraj
Mike Rogers
16
39
0
22 Nov 2019
Natural Language Generation Challenges for Explainable AI
Ehud Reiter
19
39
0
20 Nov 2019
LionForests: Local Interpretation of Random Forests
Ioannis Mollas
Nick Bassiliades
I. Vlahavas
Grigorios Tsoumakas
19
12
0
20 Nov 2019
Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization
Shiori Sagawa
Pang Wei Koh
Tatsunori B. Hashimoto
Percy Liang
OOD
16
1,200
0
20 Nov 2019
PRINCE: Provider-side Interpretability with Counterfactual Explanations in Recommender Systems
Azin Ghazimatin
Oana Balalau
Rishiraj Saha Roy
Gerhard Weikum
FAtt
27
97
0
19 Nov 2019
An explanation method for Siamese neural networks
Lev V. Utkin
M. Kovalev
E. Kasimov
27
14
0
18 Nov 2019
NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations
Xijie Huang
M. Alzantot
Mani B. Srivastava
AAML
17
103
0
18 Nov 2019
Causality-based Feature Selection: Methods and Evaluations
Kui Yu
Xianjie Guo
Lin Liu
Jiuyong Li
Hao Wang
Zhaolong Ling
Xindong Wu
CML
24
92
0
17 Nov 2019
On the computation of counterfactual explanations -- A survey
André Artelt
Barbara Hammer
LRM
30
50
0
15 Nov 2019
Question-Conditioned Counterfactual Image Generation for VQA
Jingjing Pan
Yash Goyal
Stefan Lee
EgoV
OOD
22
19
0
14 Nov 2019
Explainable Artificial Intelligence (XAI) for 6G: Improving Trust between Human and Machine
Weisi Guo
32
40
0
11 Nov 2019
Social Bias Frames: Reasoning about Social and Power Implications of Language
Maarten Sap
Saadia Gabriel
Lianhui Qin
Dan Jurafsky
Noah A. Smith
Yejin Choi
42
486
0
10 Nov 2019
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack
Sophie Hilgard
Emily Jia
Sameer Singh
Himabindu Lakkaraju
FAtt
AAML
MLAU
35
805
0
06 Nov 2019
Interpretable Self-Attention Temporal Reasoning for Driving Behavior Understanding
Yi-Chieh Liu
Yung-An Hsieh
Min-Hung Chen
Chao-Han Huck Yang
Jesper N. Tegnér
Y. Tsai
45
19
0
06 Nov 2019
Weight of Evidence as a Basis for Human-Oriented Explanations
David Alvarez-Melis
Hal Daumé
Jennifer Wortman Vaughan
Hanna M. Wallach
XAI
FAtt
24
20
0
29 Oct 2019
Feature relevance quantification in explainable AI: A causal problem
Dominik Janzing
Lenon Minorics
Patrick Blobaum
FAtt
CML
24
279
0
29 Oct 2019
Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control
Mo Yu
Shiyu Chang
Yang Zhang
Tommi Jaakkola
21
140
0
29 Oct 2019
bLIMEy: Surrogate Prediction Explanations Beyond LIME
Kacper Sokol
Alexander Hepburn
Raúl Santos-Rodríguez
Peter A. Flach
FAtt
19
38
0
29 Oct 2019
A Game Theoretic Approach to Class-wise Selective Rationalization
Shiyu Chang
Yang Zhang
Mo Yu
Tommi Jaakkola
22
60
0
28 Oct 2019
CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Patrick Schwab
W. Karlen
FAtt
CML
40
206
0
27 Oct 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
...
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
XAI
41
6,125
0
22 Oct 2019
Digital Twin approach to Clinical DSS with Explainable AI
Dattaraj J. Rao
Shraddha Mane
11
14
0
22 Oct 2019
A Decision-Theoretic Approach for Model Interpretability in Bayesian Framework
Homayun Afrabandpey
Tomi Peltola
Juho Piironen
Aki Vehtari
Samuel Kaski
25
3
0
21 Oct 2019
Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth C. Fong
Mandela Patrick
Andrea Vedaldi
AAML
25
411
0
18 Oct 2019
On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh
Been Kim
Sercan Ö. Arik
Chun-Liang Li
Tomas Pfister
Pradeep Ravikumar
FAtt
122
297
0
17 Oct 2019
Effect of Superpixel Aggregation on Explanations in LIME -- A Case Study with Biological Data
Ludwig Schallner
Johannes Rabold
O. Scholz
Ute Schmid
FAtt
46
22
0
17 Oct 2019
Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms
Z. Q. Lin
M. Shafiee
S. Bochkarev
Michael St. Jules
Xiao Yu Wang
A. Wong
FAtt
29
80
0
16 Oct 2019
Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability
Christopher Frye
C. Rowat
Ilya Feige
18
180
0
14 Oct 2019
Eavesdrop the Composition Proportion of Training Labels in Federated Learning
Lixu Wang
Shichao Xu
Tianlin Li
Qi Zhu
FedML
25
63
0
14 Oct 2019
Measuring Unfairness through Game-Theoretic Interpretability
Juliana Cesaro
Fabio Gagliardi Cozman
FAtt
16
13
0
12 Oct 2019
Testing and verification of neural-network-based safety-critical control software: A systematic literature review
Jin Zhang
Jingyue Li
25
47
0
05 Oct 2019
Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods
Oana-Maria Camburu
Eleonora Giunchiglia
Jakob N. Foerster
Thomas Lukasiewicz
Phil Blunsom
FAtt
AAML
34
60
0
04 Oct 2019
ConfusionFlow: A model-agnostic visualization for temporal analysis of classifier confusion
A. Hinterreiter
Peter Ruch
Holger Stitz
Martin Ennemoser
J. Bernard
Hendrik Strobelt
M. Streit
27
43
0
02 Oct 2019