"Why Should I Trust You?": Explaining the Predictions of Any Classifier
v1v2v3 (latest)

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 February 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
    FAttFaML
ArXiv (abs)PDFHTML

Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier

50 / 4,973 papers shown
Efficient computation of counterfactual explanations of LVQ models
André Artelt, Barbara Hammer · 02 Aug 2019

Visualizing RNN States with Predictive Semantic Encodings
Lindsey Sawatzky, Steven Bergner, F. Popowich · 01 Aug 2019

A Survey on Deep Learning of Small Sample in Biomedical Image Analysis
Pengyi Zhang, Yunxin Zhong, Yulin Deng, Xiaoying Tang, Xiaoqiong Li · 01 Aug 2019

FairSight: Visual Analytics for Fairness in Decision Making
Yongsu Ahn, Y. Lin · 01 Aug 2019

Machine Learning at the Network Edge: A Survey
M. G. Sarwar Murshed, Chris Murphy, Daqing Hou, Nazar Khan, Ganesh Ananthanarayanan, Faraz Hussain · 31 Jul 2019

Adapting SQuaRE for Quality Assessment of Artificial Intelligence Systems
Hiroshi Kuwajima, Fuyuki Ishikawa · 31 Jul 2019

What's in the box? Explaining the black-box model through an evaluation of its interpretable features
F. Ventura, Tania Cerquitelli · 31 Jul 2019

Local Interpretation Methods to Machine Learning Using the Domain of the Feature Space
T. Botari, Rafael Izbicki, A. Carvalho · FAtt · 31 Jul 2019

Graph Space Embedding
J. Pereira, A. Groen, E. Stroes, E. Levin · 31 Jul 2019

The Challenge of Imputation in Explainable Artificial Intelligence Models
M. Ahmad, C. Eckert, Ankur Teredesai · 29 Jul 2019

LassoNet: A Neural Network with Feature Sparsity
Ismael Lemhadri, Feng Ruan, L. Abraham, Robert Tibshirani · 29 Jul 2019

How model accuracy and explanation fidelity influence user trust
A. Papenmeier, G. Englebienne, C. Seifert · FaML · 26 Jul 2019

Personalised novel and explainable matrix factorisation
Ludovik Çoba, P. Symeonidis, Markus Zanker · 25 Jul 2019

How to Manipulate CNNs to Make Them Lie: the GradCAM Case
T. Viering, Ziqi Wang, Marco Loog, E. Eisemann · AAML, FAtt · 25 Jul 2019

Visual Interaction with Deep Learning Models through Collaborative Semantic Inference
Sebastian Gehrmann, Hendrik Strobelt, Robert Krüger, Hanspeter Pfister, Alexander M. Rush · HAI · 24 Jul 2019

Interpretable and Steerable Sequence Learning via Prototypes
Yao Ming, Panpan Xu, Huamin Qu, Liu Ren · AI4TS · 23 Jul 2019

Benchmarking Attribution Methods with Relative Feature Importance
Mengjiao Yang, Been Kim · FAtt, XAI · 23 Jul 2019

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations
Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, X. Renard, Marcin Detyniecki · 22 Jul 2019

Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications
Shusen Liu, Di Wang, D. Maljovec, Rushil Anirudh, Jayaraman J. Thiagarajan, ..., Peter B. Robinson, H. Bhatia, Valerio Pascucci, B. Spears, P. Bremer · 19 Jul 2019

Why Does My Model Fail? Contrastive Local Explanations for Retail Forecasting
Ana Lucic, H. Haned, Maarten de Rijke · 17 Jul 2019

A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
Erico Tjoa, Cuntai Guan · XAI · 17 Jul 2019

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Fan Yang, Mengnan Du, Helen Zhou · XAI, ELM · 16 Jul 2019

Technical Report: Partial Dependence through Stratification
T. Parr, James D. Wilson · 15 Jul 2019

A study on the Interpretability of Neural Retrieval Models using DeepSHAP
Zeon Trevor Fernando, Jaspreet Singh, Avishek Anand · FAtt, AAML · 15 Jul 2019

Metamorphic Testing of a Deep Learning based Forecaster
Anurag Dwarakanath, Manish Ahuja, Sanjay Podder, Silja Vinu, Arijit Naskar, M. Koushik · AI4TS · 13 Jul 2019

Saliency Maps Generation for Automatic Text Summarization
David Tuckey, Krysia Broda, A. Russo · FAtt · 12 Jul 2019

Sparsely Activated Networks
Paschalis A. Bizopoulos, D. Koutsouris · 12 Jul 2019

A Systematic Mapping Study on Testing of Machine Learning Programs
S. Sherin, Muhammad Uzair Khan, Muhammad Zohaib Z. Iqbal · 11 Jul 2019

Aerial Animal Biometrics: Individual Friesian Cattle Recovery and Visual Identification via an Autonomous UAV with Onboard Deep Inference
William Andrew, C. Greatwood, T. Burghardt · 11 Jul 2019

Forecasting remaining useful life: Interpretable deep learning approach via variational Bayesian inferences
Mathias Kraus, Stefan Feuerriegel · 11 Jul 2019

Explaining an increase in predicted risk for clinical alerts
Michaela Hardt, A. Rajkomar, Gerardo Flores, Andrew M. Dai, M. Howell, Greg S. Corrado, Claire Cui, Moritz Hardt · FAtt · 10 Jul 2019

The What-If Tool: Interactive Probing of Machine Learning Models
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, F. Viégas, Jimbo Wilson · VLM · 09 Jul 2019

Optimal Explanations of Linear Models
Dimitris Bertsimas, A. Delarue, Patrick Jaillet, Sébastien Martin · FAtt · 08 Jul 2019

The Price of Interpretability
Dimitris Bertsimas, A. Delarue, Patrick Jaillet, Sébastien Martin · 08 Jul 2019

Case-Based Reasoning for Assisting Domain Experts in Processing Fraud Alerts of Black-Box Machine Learning Models
Hilde J. P. Weerts, Werner van Ipenburg, Mykola Pechenizkiy · 07 Jul 2019

A Human-Grounded Evaluation of SHAP for Alert Processing
Hilde J. P. Weerts, Werner van Ipenburg, Mykola Pechenizkiy · FAtt · 07 Jul 2019

Generative Counterfactual Introspection for Explainable Deep Learning
Shusen Liu, B. Kailkhura, Donald Loveland, Yong Han · 06 Jul 2019

Global Aggregations of Local Explanations for Black Box models
I. V. D. Linden, H. Haned, Evangelos Kanoulas · FAtt · 05 Jul 2019

Explaining Predictions from Tree-based Boosting Ensembles
Ana Lucic, H. Haned, Maarten de Rijke · FAtt · 04 Jul 2019

On Validating, Repairing and Refining Heuristic ML Explanations
Alexey Ignatiev, Nina Narodytska, Sasha Rubin · FAtt, LRM · 04 Jul 2019

Automating Distributed Tiered Storage Management in Cluster Computing
H. Herodotou, E. Kakoulli · 04 Jul 2019

Consistent Regression using Data-Dependent Coverings
Vincent Margot, Jean-Patrick Baudry, Frédéric Guilloux, Olivier Wintenberger · 04 Jul 2019

Machine learning and behavioral economics for personalized choice architecture
Emir Hrnjic, N. Tomczak · CML, AI4CE · 03 Jul 2019

Interpretable Counterfactual Explanations Guided by Prototypes
A. V. Looveren, Janis Klaise · FAtt · 03 Jul 2019

A Case Study of Deep-Learned Activations via Hand-Crafted Audio Features
Olga Slizovskaia, E. Gómez, G. Haro · 03 Jul 2019

Towards Interpretable Deep Extreme Multi-label Learning
Yihuang Kang, I-Ling Cheng, W. Mao, Bowen Kuo, Pei-Ju Lee · 03 Jul 2019

How we do things with words: Analyzing text as social and cultural data
D. Nguyen, Maria Liakata, Simon DeDeo, Jacob Eisenstein, David M. Mimno, Rebekah Tromble, J. Winters · 02 Jul 2019

On the Privacy Risks of Model Explanations
Reza Shokri, Martin Strobel, Yair Zick · MIACV, PILM, SILM, FAtt · 29 Jun 2019

A Debiased MDI Feature Importance Measure for Random Forests
Xiao Li, Yu Wang, Sumanta Basu, Karl Kumbier, Bin Yu · 26 Jun 2019

Explaining Deep Learning Models with Constrained Adversarial Examples
J. Moore, Nils Y. Hammerla, C. Watkins · AAML, GAN · 25 Jun 2019