"Why Should I Trust You?": Explaining the Predictions of Any Classifier

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
    FAtt
    FaML
ArXivPDFHTML

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

50 / 4,309 papers shown
Title
Evaluating and Aggregating Feature-based Model Explanations
Umang Bhatt, Adrian Weller, J. M. F. Moura · XAI · 38 / 218 / 0 · 01 May 2020

The Grammar of Interactive Explanatory Model Analysis
Hubert Baniecki, Dariusz Parzych, P. Biecek · 24 / 44 / 0 · 01 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran · AAML, XAI · 49 / 371 / 0 · 30 Apr 2020

Neural Additive Models: Interpretable Machine Learning with Neural Nets
Rishabh Agarwal, Levi Melnick, Nicholas Frosst, Xuezhou Zhang, Ben Lengerich, R. Caruana, Geoffrey E. Hinton · 46 / 406 / 0 · 29 Apr 2020

An Explainable Deep Learning-based Prognostic Model for Rotating Machinery
Namkyoung Lee, M. Azarian, M. Pecht · 16 / 14 / 0 · 28 Apr 2020

Time Series Forecasting With Deep Learning: A Survey
Bryan Lim, S. Zohren · AI4TS, AI4CE · 59 / 1,192 / 0 · 28 Apr 2020

Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
B. Shickel, Parisa Rashidi · AI4TS · 33 / 17 / 0 · 27 Apr 2020

An Extension of LIME with Improvement of Interpretability and Fidelity
Sheng Shi, Yangzhou Du, Wei Fan · FAtt · 16 / 8 / 0 · 26 Apr 2020

Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu · AAML · 31 / 8 / 0 · 23 Apr 2020

Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
Sungsoo Ray Hong, Jessica Hullman, E. Bertini · HAI · 22 / 191 / 0 · 23 Apr 2020

Learning a Formula of Interpretability to Learn Interpretable Formulas
M. Virgolin, A. D. Lorenzo, Eric Medvet, Francesca Randone · 25 / 33 / 0 · 23 Apr 2020

Automated diagnosis of COVID-19 with limited posteroanterior chest X-ray images using fine-tuned deep neural networks
Narinder Singh Punn, Sonali Agarwal · 15 / 191 / 0 · 23 Apr 2020

Perturb More, Trap More: Understanding Behaviors of Graph Neural Networks
Chaojie Ji, Ruxin Wang, Hongyan Wu · 31 / 7 / 0 · 21 Apr 2020

Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision
Damien Teney, Ehsan Abbasnejad, Anton Van Den Hengel · OOD, SSL, CML · 37 / 118 / 0 · 20 Apr 2020

How recurrent networks implement contextual processing in sentiment analysis
Niru Maheswaranathan, David Sussillo · 22 / 22 / 0 · 17 Apr 2020

CrossCheck: Rapid, Reproducible, and Interpretable Model Evaluation
Dustin L. Arendt, Zhuanyi Huang, Prasha Shrestha, Ellyn Ayton, M. Glenski, Svitlana Volkova · 32 / 8 / 0 · 16 Apr 2020

Explaining Regression Based Neural Network Model
Mégane Millan, Catherine Achard · FAtt · 24 / 3 / 0 · 15 Apr 2020

Deep Learning Models for Multilingual Hate Speech Detection
Sai Saket Aluru, Binny Mathew, Punyajoy Saha, Animesh Mukherjee · 16 / 148 / 0 · 14 Apr 2020

Complaint-driven Training Data Debugging for Query 2.0
Weiyuan Wu, Lampros Flokas, Eugene Wu, Jiannan Wang · 32 / 43 / 0 · 12 Apr 2020

Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi, Yoav Goldberg · XAI · 48 / 571 / 0 · 07 Apr 2020

TSInsight: A local-global attribution framework for interpretability in time-series data
Shoaib Ahmed Siddiqui, Dominique Mercier, Andreas Dengel, Sheraz Ahmed · FAtt, AI4TS · 19 / 12 / 0 · 06 Apr 2020

Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection
Hanjie Chen, Guangtao Zheng, Yangfeng Ji · FAtt · 38 / 92 / 0 · 04 Apr 2020

Attribution in Scale and Space
Shawn Xu, Subhashini Venugopalan, Mukund Sundararajan · FAtt, BDL · 14 / 71 / 0 · 03 Apr 2020

Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations
Richard Meyes, Constantin Waubert de Puiseau, Andres Felipe Posada-Moreno, Tobias Meisen · AI4CE · 30 / 21 / 0 · 02 Apr 2020

NBDT: Neural-Backed Decision Trees
Alvin Wan, Lisa Dunlap, Daniel Ho, Jihan Yin, Scott Lee, Henry Jin, Suzanne Petryk, Sarah Adel Bargal, Joseph E. Gonzalez · 26 / 99 / 0 · 01 Apr 2020

Ontology-based Interpretable Machine Learning for Textual Data
Phung Lai, Nhathai Phan, Han Hu, Anuja Badeti, David Newman, Dejing Dou · 19 / 8 / 0 · 01 Apr 2020

Code Prediction by Feeding Trees to Transformers
Seohyun Kim, Jinman Zhao, Yuchi Tian, S. Chandra · 48 / 217 / 0 · 30 Mar 2020

A Survey of Deep Learning for Scientific Discovery
M. Raghu, Erica Schmidt · OOD, AI4CE · 47 / 120 / 0 · 26 Mar 2020

Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples
Alejandro Barredo Arrieta, Javier Del Ser · AAML · 15 / 22 / 0 · 25 Mar 2020

Layerwise Knowledge Extraction from Deep Convolutional Networks
S. Odense, Artur Garcez · FAtt · 26 / 9 / 0 · 19 Mar 2020

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller · XAI · 51 / 82 / 0 · 17 Mar 2020

Directions for Explainable Knowledge-Enabled Systems
Shruthi Chari, Daniel Gruen, Oshani Seneviratne, D. McGuinness · XAI · 18 / 32 / 0 · 17 Mar 2020

Foundations of Explainable Knowledge-Enabled Systems
Shruthi Chari, Daniel Gruen, Oshani Seneviratne, D. McGuinness · 41 / 28 / 0 · 17 Mar 2020

Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
L. Arras, Ahmed Osman, Wojciech Samek · XAI, AAML · 21 / 150 / 0 · 16 Mar 2020

GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions
Zebin Yang, Aijun Zhang, Agus Sudjianto · FAtt · 19 / 126 / 0 · 16 Mar 2020

Self-Supervised Discovering of Interpretable Features for Reinforcement Learning
Wenjie Shi, Gao Huang, Shiji Song, Zhuoyuan Wang, Tingyu Lin, Cheng Wu · SSL · 28 / 18 / 0 · 16 Mar 2020

Model Agnostic Multilevel Explanations
Karthikeyan N. Ramamurthy, B. Vinzamuri, Yunfeng Zhang, Amit Dhurandhar · 29 / 41 / 0 · 12 Mar 2020

xCos: An Explainable Cosine Metric for Face Verification Task
Yu-sheng Lin, Zhe-Yu Liu, Yu-An Chen, Yu-Siang Wang, Ya-Liang Chang, Winston H. Hsu · CVBM · 33 / 46 / 0 · 11 Mar 2020

Fairness by Explicability and Adversarial SHAP Learning
James M. Hickey, Pietro G. Di Stefano, V. Vasileiou · FAtt, FedML · 33 / 19 / 0 · 11 Mar 2020

Machine Learning for Intelligent Optical Networks: A Comprehensive Survey
Rentao Gu, Zeyuan Yang, Yuefeng Ji · 27 / 109 / 0 · 11 Mar 2020

IROF: a low resource evaluation metric for explanation methods
Laura Rieger, Lars Kai Hansen · 28 / 55 / 0 · 09 Mar 2020

Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation
Raha Moraffah, Mansooreh Karami, Ruocheng Guo, A. Raglin, Huan Liu · CML, ELM, XAI · 29 / 213 / 0 · 09 Mar 2020

Link Prediction using Graph Neural Networks for Master Data Management
Balaji Ganesan, Srinivas Parkala, Neeraj R Singh, Sumit Bhatia, Gayatri Mishra, Matheen Ahmed Pasha, Hima Patel, Somashekar Naganna · AI4CE · 35 / 11 / 0 · 07 Mar 2020

MAB-Malware: A Reinforcement Learning Framework for Attacking Static Malware Classifiers
Wei Song, Xuezixiang Li, Sadia Afroz, D. Garg, Dmitry Kuznetsov, Heng Yin · AAML · 55 / 27 / 0 · 06 Mar 2020

What went wrong and when? Instance-wise Feature Importance for Time-series Models
S. Tonekaboni, Shalmali Joshi, Kieran Campbell, David Duvenaud, Anna Goldenberg · FAtt, OOD, AI4TS · 56 / 14 / 0 · 05 Mar 2020

ViCE: Visual Counterfactual Explanations for Machine Learning Models
Oscar Gomez, Steffen Holter, Jun Yuan, E. Bertini · AAML · 59 / 93 / 0 · 05 Mar 2020

EXPLAIN-IT: Towards Explainable AI for Unsupervised Network Traffic Analysis
Andrea Morichetta, P. Casas, Marco Mellia · 15 / 55 / 0 · 03 Mar 2020

Two Decades of AI4NETS-AI/ML for Data Networks: Challenges & Research Directions
P. Casas · GNN · 19 / 8 / 0 · 03 Mar 2020

Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong · OOD, AAML · 72 / 63 / 0 · 02 Mar 2020

A Study on Multimodal and Interactive Explanations for Visual Question Answering
Kamran Alipour, J. Schulze, Yi Yao, Avi Ziskind, Giedrius Burachas · 32 / 27 / 0 · 01 Mar 2020