"Why Should I Trust You?": Explaining the Predictions of Any Classifier

Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
16 February 2016 · FAtt, FaML
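
This is the paper that introduced LIME (Local Interpretable Model-agnostic Explanations). As a point of reference for the citing work listed below, here is a minimal sketch of applying the method with the authors' open-source `lime` package; the dataset, model, and parameter choices are illustrative assumptions, not taken from this page.

```python
# Minimal LIME sketch using the authors' `lime` package (pip install lime).
# Dataset, model, and parameters are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME treats the model as a black box: it perturbs the instance, queries
# predict_proba on the perturbed samples, and fits a local linear surrogate.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature, weight) pairs of the local surrogate
```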

Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier
50 of 4,309 citing papers shown

Self-Explaining Structures Improve NLP Models
Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li
MILM, XAI, LRM, FAtt · 48 · 38 · 0 · 03 Dec 2020

Explainable AI for Software Engineering
Chakkrit Tantithamthavorn, Jirayus Jiarpakdee, J. Grundy
29 · 58 · 0 · 03 Dec 2020

Evaluating Explanations: How much do explanations from the teacher aid students?
Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary Chase Lipton, Graham Neubig, William W. Cohen
FAtt, XAI · 22 · 108 · 0 · 01 Dec 2020

Every Model Learned by Gradient Descent Is Approximately a Kernel Machine
Pedro M. Domingos
MLT · 29 · 71 · 0 · 30 Nov 2020

Why model why? Assessing the strengths and limitations of LIME
Jurgen Dieber, S. Kirrane
FAtt · 26 · 97 · 0 · 30 Nov 2020

TimeSHAP: Explaining Recurrent Models through Sequence Perturbations
João Bento, Pedro Saleiro, André F. Cruz, Mário A. T. Figueiredo, P. Bizarro
FAtt, AI4TS · 24 · 88 · 0 · 30 Nov 2020

ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery
Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński
21 · 111 · 0 · 29 Nov 2020

Reflective-Net: Learning from Explanations
Johannes Schneider, Michalis Vlachos
FAtt, OffRL, LRM · 57 · 18 · 0 · 27 Nov 2020

Explainable AI for ML jet taggers using expert variables and layerwise relevance propagation
G. Agarwal, L. Hay, I. Iashvili, Benjamin Mannix, C. McLean, Margaret E. Morris, S. Rappoccio, U. Schubert
46 · 18 · 0 · 26 Nov 2020

Understand Watchdogs: Discover How Game Bot Get Discovered
Eunji Park, Kyoung Ho Park, H. Kim
9 · 4 · 0 · 26 Nov 2020

Towards Interpretable Multilingual Detection of Hate Speech against Immigrants and Women in Twitter at SemEval-2019 Task 5
A. Ishmam
21 · 1 · 0 · 26 Nov 2020

Probing Model Signal-Awareness via Prediction-Preserving Input Minimization
Sahil Suneja, Yunhui Zheng, Yufan Zhuang, Jim Laredo, Alessandro Morari
AAML · 32 · 33 · 0 · 25 Nov 2020

Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Wolfgang Stammer, P. Schramowski, Kristian Kersting
FAtt · 14 · 107 · 0 · 25 Nov 2020

Explaining by Removing: A Unified Framework for Model Explanation
Ian Covert, Scott M. Lundberg, Su-In Lee
FAtt · 53 · 243 · 0 · 21 Nov 2020

Gradient Starvation: A Learning Proclivity in Neural Networks
Mohammad Pezeshki, Sekouba Kaba, Yoshua Bengio, Aaron Courville, Doina Precup, Guillaume Lajoie
MLT · 54 · 259 · 0 · 18 Nov 2020

Declarative Approaches to Counterfactual Explanations for Classification
Leopoldo Bertossi
44 · 17 · 0 · 15 Nov 2020

Deep Interpretable Classification and Weakly-Supervised Segmentation of Histology Images via Max-Min Uncertainty
Soufiane Belharbi, Jérôme Rony, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger
24 · 52 · 0 · 14 Nov 2020

Robust and Stable Black Box Explanations
Himabindu Lakkaraju, Nino Arsov, Osbert Bastani
AAML, FAtt · 24 · 84 · 0 · 12 Nov 2020

What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes
Herman Yau, Chris Russell, Simon Hadfield
FAtt, LRM · 28 · 36 · 0 · 10 Nov 2020

Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End
R. Mothilal, Divyat Mahajan, Chenhao Tan, Amit Sharma
FAtt, CML · 32 · 100 · 0 · 10 Nov 2020

Parameterized Explainer for Graph Neural Network
Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, Xiang Zhang
53 · 542 · 0 · 09 Nov 2020

Explaining Deep Graph Networks with Molecular Counterfactuals
Danilo Numeroso, D. Bacciu
21 · 10 · 0 · 09 Nov 2020

Adversarial Black-Box Attacks On Text Classifiers Using Multi-Objective Genetic Optimization Guided By Deep Networks
Alex Mathai, Shreya Khare, Srikanth G. Tamilselvam, Senthil Mani
AAML · 36 · 6 · 0 · 08 Nov 2020

Explainable Automated Fact-Checking: A Survey
Neema Kotonya, Francesca Toni
8 · 113 · 0 · 07 Nov 2020

Feature Removal Is a Unifying Principle for Model Explanation Methods
Ian Covert, Scott M. Lundberg, Su-In Lee
FAtt · 33 · 33 · 0 · 06 Nov 2020

Learning Causal Semantic Representation for Out-of-Distribution Prediction
Chang-Shu Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu
CML, OODD, OOD · 35 · 104 · 0 · 03 Nov 2020

Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models
Tom Heskes, E. Sijben, I. G. Bucur, Tom Claassen
FAtt, TDI · 25 · 151 · 0 · 03 Nov 2020

Quadratic Metric Elicitation for Fairness and Beyond
Gaurush Hiranandani, Jatin Mathur, Harikrishna Narasimhan, Oluwasanmi Koyejo
27 · 5 · 0 · 03 Nov 2020

MAIRE -- A Model-Agnostic Interpretable Rule Extraction Procedure for Explaining Classifiers
Rajat Sharma, N. Reddy, V. Kamakshi, N. C. Krishnan, Shweta Jain
FAtt · 29 · 7 · 0 · 03 Nov 2020

Weakly- and Semi-supervised Evidence Extraction
Danish Pruthi, Bhuwan Dhingra, Graham Neubig, Zachary Chase Lipton
9 · 23 · 0 · 03 Nov 2020

Towards Ethics by Design in Online Abusive Content Detection
S. Kiritchenko, I. Nejadgholi
34 · 13 · 0 · 28 Oct 2020

Selective Classification Can Magnify Disparities Across Groups
Erik Jones, Shiori Sagawa, Pang Wei Koh, Ananya Kumar, Percy Liang
39 · 46 · 0 · 27 Oct 2020

A robust low data solution: dimension prediction of semiconductor nanorods
Xiaoli Liu, Yang Xu, Jiali Li, Xuanwei Ong, S. A. Ibrahim, Tonio Buonassisi, Xiaonan Wang
14 · 11 · 0 · 27 Oct 2020

Interpretation of NLP models through input marginalization
Siwon Kim, Jihun Yi, Eunji Kim, Sungroh Yoon
MILM, FAtt · 30 · 58 · 0 · 27 Oct 2020

GPUTreeShap: Massively Parallel Exact Calculation of SHAP Scores for Tree Ensembles
Rory Mitchell, E. Frank, G. Holmes
14 · 55 · 0 · 27 Oct 2020

Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability
Gintare Karolina Dziugaite, Shai Ben-David, Daniel M. Roy
FaML · 17 · 39 · 0 · 26 Oct 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt · 47 · 7 · 0 · 23 Oct 2020

A Multilinear Sampling Algorithm to Estimate Shapley Values
Ramin Okhrati, Aldo Lipani
TDI, FAtt · 87 · 38 · 0 · 22 Oct 2020

Meta-trained agents implement Bayes-optimal agents
Vladimir Mikulik, Grégoire Delétang, Tom McGrath, Tim Genewein, Miljan Martic, Shane Legg, Pedro A. Ortega
OOD, FedML · 40 · 41 · 0 · 21 Oct 2020

Axiom Learning and Belief Tracing for Transparent Decision Making in Robotics
Tiago Mota, Mohan Sridharan
21 · 5 · 0 · 20 Oct 2020

Interpreting convolutional networks trained on textual data
Reza Marzban, Christopher Crick
FAtt · 27 · 3 · 0 · 20 Oct 2020

Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah
CML · 28 · 164 · 0 · 20 Oct 2020

A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images
Pablo Messina, Pablo Pino, Denis Parra, Alvaro Soto, Cecilia Besa, S. Uribe, Marcelo Andía, C. Tejos, Claudia Prieto, Daniel Capurro
MedIm · 36 · 62 · 0 · 20 Oct 2020

Explainable Automated Fact-Checking for Public Health Claims
Neema Kotonya, Francesca Toni
218 · 251 · 0 · 19 Oct 2020

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
AAML · 41 · 48 · 0 · 19 Oct 2020

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
Christoph Molnar, Giuseppe Casalicchio, B. Bischl
AI4TS, AI4CE · 28 · 396 · 0 · 19 Oct 2020

Feature Importance Ranking for Deep Learning
Maksymilian Wojtas, Ke Chen
147 · 115 · 0 · 18 Oct 2020

Altruist: Argumentative Explanations through Local Interpretations of Predictive Models
Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas
21 · 13 · 0 · 15 Oct 2020

Hierarchical Text Interaction for Rating Prediction
Jiahui Wen, Jingwei Ma, Hongkui Tu, Wei Yin, Jian Fang
27 · 6 · 0 · 15 Oct 2020

Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines
A. Konstantinov, Lev V. Utkin
FedML, AI4CE · 12 · 139 · 0 · 14 Oct 2020