ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
16 February 2016 [FAtt, FaML]

Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier

50 of 4,973 citing papers shown (title, authors, topic tags, date):
 1. AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
    Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh [MILM] (19 Sep 2019)
 2. Representation Learning for Electronic Health Records
    W. Weng, Peter Szolovits (19 Sep 2019)
 3. InterpretML: A Unified Framework for Machine Learning Interpretability
    Harsha Nori, Samuel Jenkins, Paul Koch, R. Caruana [AI4CE] (19 Sep 2019)
 4. Slices of Attention in Asynchronous Video Job Interviews
    Léo Hemamou, G. Felhi, Jean-Claude Martin, Chloé Clavel (19 Sep 2019)
 5. Large-scale representation learning from visually grounded untranscribed speech
    Gabriel Ilharco, Yuan Zhang, Jason Baldridge [SSL] (19 Sep 2019)
 6. Semantically Interpretable Activation Maps: what-where-how explanations within CNNs
    Diego Marcos, Sylvain Lobry, D. Tuia [FAtt, MILM] (18 Sep 2019)
 7. The Explanation Game: Explaining Machine Learning Models Using Shapley Values
    Luke Merrick, Ankur Taly [FAtt, TDI] (17 Sep 2019)
 8. X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust
    Arjun Reddy Akula, Changsong Liu, Sari Saba-Sadiya, Hongjing Lu, S. Todorovic, J. Chai, Song-Chun Zhu (15 Sep 2019)
 9. Co-Attentive Cross-Modal Deep Learning for Medical Evidence Synthesis and Decision Making
    Devin Taylor, Simeon E. Spasov, Pietro Lio (13 Sep 2019)
10. Shapley Interpretation and Activation in Neural Networks
    Yadong Li, Xin Cui [TDI, FAtt, LLMSV] (13 Sep 2019)
11. New Perspective of Interpretability of Deep Neural Networks
    Masanari Kimura, Masayuki Tanaka [AAML, FAtt, FaML, AI4CE] (12 Sep 2019)
12. FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency
    Kacper Sokol, Raúl Santos-Rodríguez, Peter A. Flach (11 Sep 2019)
13. Towards Safe Machine Learning for CPS: Infer Uncertainty from Training Data
    Xiaozhe Gu, Arvind Easwaran (11 Sep 2019)
14. NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks
    Isaac Ahern, Adam Noack, Luis Guzman-Nateras, Dejing Dou, Boyang Albert Li, Jun Huan [FAtt] (10 Sep 2019)
15. Learning Fair Rule Lists
    Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, Mohamed Siala [FaML] (09 Sep 2019)
16. Deep Weakly-Supervised Learning Methods for Classification and Localization in Histology Images: A Survey
    Jérôme Rony, Soufiane Belharbi, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger (08 Sep 2019)
17. Explainable Deep Learning for Video Recognition Tasks: A Framework & Recommendations
    Liam Hiley, Alun D. Preece, Y. Hicks [XAI] (07 Sep 2019)
18. Equalizing Recourse across Groups
    Vivek Gupta, Pegah Nokhiz, Chitradeep Dutta Roy, Suresh Venkatasubramanian [FaML] (07 Sep 2019)
19. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
    Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, ..., Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis L. Wei, Yunfeng Zhang [XAI] (06 Sep 2019)
20. DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning
    Theo Jaunet, Romain Vuillemot, Christian Wolf [HAI] (06 Sep 2019)
21. Testing Deep Learning Models for Image Analysis Using Object-Relevant Metamorphic Relations
    Yongqiang Tian, Shiqing Ma, Ming Wen, Yepang Liu, Shing-Chi Cheung, Xinming Zhang [VLM] (06 Sep 2019)
22. Human-AI Collaboration in Data Science: Exploring Data Scientists' Perceptions of Automated AI
    Dakuo Wang, Justin D. Weisz, Michael J. Muller, Parikshit Ram, Werner Geyer, Casey Dugan, Y. Tausczik, Horst Samulowitz, Alexander G. Gray (05 Sep 2019)
23. ALIME: Autoencoder Based Approach for Local Interpretability
    Sharath M. Shankaranarayana, D. Runje [FAtt] (04 Sep 2019)
24. Towards Interpretable Polyphonic Transcription with Invertible Neural Networks
    Rainer Kelz, Gerhard Widmer (04 Sep 2019)
25. Understanding Bias in Machine Learning
    Jindong Gu, Daniela Oelke [AI4CE, FaML] (02 Sep 2019)
26. Human-grounded Evaluations of Explanation Methods for Text Classification
    Piyawat Lertvittayakumjorn, Francesca Toni [FAtt] (29 Aug 2019)
27. Machine learning algorithms to infer trait-matching and predict species interactions in ecological networks
    Maximilian Pichler, V. Boreux, A. Klein, M. Schleuning, F. Hartig (26 Aug 2019)
28. Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data
    Dylan Slack, Sorelle A. Friedler, Emile Givental [FaML] (24 Aug 2019)
29. Fairness in Deep Learning: A Computational Perspective
    Mengnan Du, Fan Yang, Na Zou, Helen Zhou [FaML, FedML] (23 Aug 2019)
30. The many Shapley values for model explanation
    Mukund Sundararajan, A. Najmi [TDI, FAtt] (22 Aug 2019)
31. Saliency Methods for Explaining Adversarial Attacks
    Jindong Gu, Volker Tresp [FAtt, AAML] (22 Aug 2019)
32. TabNet: Attentive Interpretable Tabular Learning
    Sercan O. Arik, Tomas Pfister [LMTD] (20 Aug 2019)
33. Fine-grained Sentiment Analysis with Faithful Attention
    Ruiqi Zhong, Steven Shao, Kathleen McKeown (19 Aug 2019)
34. Fairness Issues in AI Systems that Augment Sensory Abilities
    Leah Findlater, Steven M. Goodman, Yuhang Zhao, Shiri Azenkot, Margot Hanley (16 Aug 2019)
35. Tackling Algorithmic Bias in Neural-Network Classifiers using Wasserstein-2 Regularization
    Laurent Risser, Alberto González Sanz, Quentin Vincenot, Jean-Michel Loubes (15 Aug 2019)
36. Visualizing Image Content to Explain Novel Image Discovery
    Jake H. Lee, K. Wagstaff (14 Aug 2019)
37. Requirements Engineering for Machine Learning: Perspectives from Data Scientists
    Andreas Vogelsang, Markus Borg (13 Aug 2019)
38. Learning Credible Deep Neural Networks with Rationale Regularization
    Mengnan Du, Ninghao Liu, Fan Yang, Helen Zhou [FaML] (13 Aug 2019)
39. Regional Tree Regularization for Interpretability in Black Box Models
    Mike Wu, S. Parbhoo, M. C. Hughes, R. Kindle, Leo Anthony Celi, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez (13 Aug 2019)
40. A Survey of Challenges and Opportunities in Sensing and Analytics for Cardiovascular Disorders
    N. Hurley, E. Spatz, H. Krumholz, R. Jafari, B. Mortazavi (12 Aug 2019)
41. LoRMIkA: Local rule-based model interpretability with k-optimal associations
    Dilini Sewwandi Rajapaksha, Christoph Bergmeir, Wray Buntine (11 Aug 2019)
42. Neural Image Compression and Explanation
    Xiang Li, Shihao Ji (09 Aug 2019)
43. Measurable Counterfactual Local Explanations for Any Classifier
    Adam White, Artur Garcez [FAtt] (08 Aug 2019)
44. Investigating Decision Boundaries of Trained Neural Networks
    Roozbeh Yousefzadeh, D. O'Leary [AAML] (07 Aug 2019)
45. Advocacy Learning: Learning through Competition and Class-Conditional Representations
    Ian Fox, Jenna Wiens [SSL] (07 Aug 2019)
46. Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks
    Jörg Wagner, Jan M. Köhler, Tobias Gindele, Leon Hetzel, Thaddäus Wiedemer, Sven Behnke [AAML, FAtt] (07 Aug 2019)
47. Knowledge Consistency between Neural Networks and Beyond
    Ruofan Liang, Tianlin Li, Longfei Li, Jingchao Wang, Quanshi Zhang (05 Aug 2019)
48. Semi-supervised Thai Sentence Segmentation Using Local and Distant Word Representations
    Chanatip Saetia, Ekapol Chuangsuwanich, Tawunrat Chalothorn, P. Vateekul (04 Aug 2019)
49. Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models
    Daniel Omeiza, Skyler Speakman, C. Cintas, Komminist Weldemariam [FAtt] (03 Aug 2019)
50. TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
    Wenbo Guo, Lun Wang, Masashi Sugiyama, Min Du, Basel Alomair (02 Aug 2019)