"Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 · 16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML

Papers citing ""Why Should I Trust You?": Explaining the Predictions of Any Classifier"

50 / 4,307 papers shown
Title
Counterfactual States for Atari Agents via Generative Deep Learning
Matthew Lyle Olson, Lawrence Neal, Fuxin Li, Weng-Keen Wong
CML
21 · 29 · 0
27 Sep 2019

Hidden Stratification Causes Clinically Meaningful Failures in Machine Learning for Medical Imaging
Luke Oakden-Rayner, Jared A. Dunnmon, G. Carneiro, Christopher Ré
OOD
38 · 373 · 0
27 Sep 2019

Interpreting Undesirable Pixels for Image Classification on Black-Box Models
Sin-Han Kang, Hong G. Jung, Seong-Whan Lee
FAtt
19 · 3 · 0
27 Sep 2019

Towards Explainable Artificial Intelligence
Wojciech Samek, K. Müller
XAI
32 · 437 · 0
26 Sep 2019

Interpretable Models for Understanding Immersive Simulations
Nicholas Hoernle, Ya'akov Gal, Barbara J. Grosz, Leilah Lyons, Ada Ren, Andee Rubin
24 · 4 · 0
24 Sep 2019

Deep Convolutions for In-Depth Automated Rock Typing
E. E. Baraboshkin, L. Ismailova, D. Orlov, E. Zhukovskaya, G. Kalmykov, O. V. Khotylev, E. Baraboshkin, D. Koroteev
33 · 84 · 0
23 Sep 2019

FACE: Feasible and Actionable Counterfactual Explanations
Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodríguez, T. D. Bie, Peter A. Flach
11 · 365 · 0
20 Sep 2019

AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh
MILM
28 · 137 · 0
19 Sep 2019

Representation Learning for Electronic Health Records
W. Weng, Peter Szolovits
36 · 19 · 0
19 Sep 2019

Slices of Attention in Asynchronous Video Job Interviews
Léo Hemamou, G. Felhi, Jean-Claude Martin, Chloé Clavel
13 · 20 · 0
19 Sep 2019

Large-scale representation learning from visually grounded untranscribed speech
Gabriel Ilharco, Yuan Zhang, Jason Baldridge
SSL
27 · 60 · 0
19 Sep 2019

Semantically Interpretable Activation Maps: what-where-how explanations within CNNs
Diego Marcos, Sylvain Lobry, D. Tuia
FAtt, MILM
25 · 26 · 0
18 Sep 2019

X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust
Arjun Reddy Akula, Changsong Liu, Sari Saba-Sadiya, Hongjing Lu, S. Todorovic, J. Chai, Song-Chun Zhu
24 · 18 · 0
15 Sep 2019

Towards Safe Machine Learning for CPS: Infer Uncertainty from Training Data
Xiaozhe Gu, Arvind Easwaran
13 · 29 · 0
11 Sep 2019

Deep Weakly-Supervised Learning Methods for Classification and Localization in Histology Images: A Survey
Jérôme Rony, Soufiane Belharbi, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger
34 · 70 · 0
08 Sep 2019

DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning
Theo Jaunet, Romain Vuillemot, Christian Wolf
HAI
18 · 36 · 0
06 Sep 2019

Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data
Dylan Slack, Sorelle A. Friedler, Emile Givental
FaML
32 · 54 · 0
24 Aug 2019

TabNet: Attentive Interpretable Tabular Learning
Sercan Ö. Arik, Tomas Pfister
LMTD
55 · 1,293 · 0
20 Aug 2019

Visualizing Image Content to Explain Novel Image Discovery
Jake H. Lee, K. Wagstaff
27 · 3 · 0
14 Aug 2019

Regional Tree Regularization for Interpretability in Black Box Models
Mike Wu, S. Parbhoo, M. C. Hughes, R. Kindle, Leo Anthony Celi, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez
15 · 37 · 0
13 Aug 2019

LoRMIkA: Local rule-based model interpretability with k-optimal associations
Dilini Sewwandi Rajapaksha, Christoph Bergmeir, Wray Buntine
35 · 31 · 0
11 Aug 2019

Advocacy Learning: Learning through Competition and Class-Conditional Representations
Ian Fox, Jenna Wiens
SSL
25 · 2 · 0
07 Aug 2019

NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning
M. Alzantot, Amy Widdicombe, S. Julier, Mani B. Srivastava
AAML, FAtt
26 · 3 · 0
05 Aug 2019

Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models
Daniel Omeiza, Skyler Speakman, C. Cintas, Komminist Weldemariam
FAtt
22 · 216 · 0
03 Aug 2019

Machine Learning at the Network Edge: A Survey
M. G. Sarwar Murshed, Chris Murphy, Daqing Hou, Nazar Khan, Ganesh Ananthanarayanan, Faraz Hussain
38 · 378 · 0
31 Jul 2019

LassoNet: A Neural Network with Feature Sparsity
Ismael Lemhadri, Feng Ruan, L. Abraham, Robert Tibshirani
41 · 122 · 0
29 Jul 2019

Visual Interaction with Deep Learning Models through Collaborative Semantic Inference
Sebastian Gehrmann, Hendrik Strobelt, Robert Krüger, Hanspeter Pfister, Alexander M. Rush
HAI
21 · 57 · 0
24 Jul 2019

Interpretable and Steerable Sequence Learning via Prototypes
Yao Ming, Panpan Xu, Huamin Qu, Liu Ren
AI4TS
12 · 138 · 0
23 Jul 2019

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Fan Yang, Mengnan Du, Xia Hu
XAI, ELM
32 · 67 · 0
16 Jul 2019

Technical Report: Partial Dependence through Stratification
T. Parr, James D. Wilson
11 · 2 · 0
15 Jul 2019

A study on the Interpretability of Neural Retrieval Models using DeepSHAP
Zeon Trevor Fernando, Jaspreet Singh, Avishek Anand
FAtt, AAML
24 · 68 · 0
15 Jul 2019

Metamorphic Testing of a Deep Learning based Forecaster
Anurag Dwarakanath, Manish Ahuja, Sanjay Podder, Silja Vinu, Arijit Naskar, M. Koushik
AI4TS
16 · 9 · 0
13 Jul 2019

A Systematic Mapping Study on Testing of Machine Learning Programs
S. Sherin, Muhammad Uzair Khan, Muhammad Zohaib Z. Iqbal
30 · 13 · 0
11 Jul 2019

Aerial Animal Biometrics: Individual Friesian Cattle Recovery and Visual Identification via an Autonomous UAV with Onboard Deep Inference
William Andrew, C. Greatwood, T. Burghardt
22 · 52 · 0
11 Jul 2019

The What-If Tool: Interactive Probing of Machine Learning Models
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, F. Viégas, Jimbo Wilson
VLM
57 · 484 · 0
09 Jul 2019

On the Semantic Interpretability of Artificial Intelligence Models
V. S. Silva, André Freitas, Siegfried Handschuh
AI4CE
25 · 8 · 0
09 Jul 2019

The Price of Interpretability
Dimitris Bertsimas, A. Delarue, Patrick Jaillet, Sébastien Martin
23 · 33 · 0
08 Jul 2019

A Human-Grounded Evaluation of SHAP for Alert Processing
Hilde J. P. Weerts, Werner van Ipenburg, Mykola Pechenizkiy
FAtt
11 · 70 · 0
07 Jul 2019

Generative Counterfactual Introspection for Explainable Deep Learning
Shusen Liu, B. Kailkhura, Donald Loveland, Yong Han
25 · 90 · 0
06 Jul 2019

Global Aggregations of Local Explanations for Black Box models
I. V. D. Linden, H. Haned, Evangelos Kanoulas
FAtt
27 · 63 · 0
05 Jul 2019

Automating Distributed Tiered Storage Management in Cluster Computing
H. Herodotou, E. Kakoulli
21 · 24 · 0
04 Jul 2019

Interpretable Counterfactual Explanations Guided by Prototypes
A. V. Looveren, Janis Klaise
FAtt
29 · 380 · 0
03 Jul 2019

A Debiased MDI Feature Importance Measure for Random Forests
Xiao Li, Yu Wang, Sumanta Basu, Karl Kumbier, Bin Yu
27 · 83 · 0
26 Jun 2019

DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems
Muhammad Rehman Zafar, N. Khan
FAtt
14 · 153 · 0
24 Jun 2019

Generating Counterfactual and Contrastive Explanations using SHAP
Shubham Rathi
24 · 56 · 0
21 Jun 2019

Machine Learning Testing: Survey, Landscapes and Horizons
Jie M. Zhang, Mark Harman, Lei Ma, Yang Liu
VLM, AILaw
39 · 741 · 0
19 Jun 2019

Incorporating Priors with Feature Attribution on Text Classification
Frederick Liu, Besim Avci
FAtt, FaML
36 · 120 · 0
19 Jun 2019

MoËT: Mixture of Expert Trees and its Application to Verifiable Reinforcement Learning
Marko Vasic, Andrija Petrović, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, S. Khurshid
OffRL, MoE
20 · 23 · 0
16 Jun 2019

Yoga-Veganism: Correlation Mining of Twitter Health Data
Tunazzina Islam
8 · 22 · 0
15 Jun 2019

ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, Michael I. Jordan
AAML
22 · 101 · 0
08 Jun 2019