A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
arXiv:1705.07874 (v2, latest) · 22 May 2017 · FAtt
Papers citing "A Unified Approach to Interpreting Model Predictions" (showing 50 of 3,921)
OncoNetExplainer: Explainable Predictions of Cancer Types Based on Gene Expression Data · Md. Rezaul Karim, Michael Cochez, Oya Beyan, Stefan Decker, Christoph Lange · 09 Sep 2019
Shapley Values of Reconstruction Errors of PCA for Explaining Anomaly Detection · Naoya Takeishi · FAtt · 08 Sep 2019
When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures · Gil Fidel, Ron Bitton, A. Shabtai · FAtt, GAN · 08 Sep 2019
Explainable Deep Learning for Video Recognition Tasks: A Framework & Recommendations · Liam Hiley, Alun D. Preece, Y. Hicks · XAI · 07 Sep 2019
Generalized Integrated Gradients: A practical method for explaining diverse ensembles · John Merrill, Geoff Ward, S. Kamkar, Jay Budzik, Douglas C. Merrill · 04 Sep 2019
Predicting Consumer Default: A Deep Learning Approach · Stefania Albanesi, Domonkos F. Vamossy · FaML · 30 Aug 2019
Human-grounded Evaluations of Explanation Methods for Text Classification · Piyawat Lertvittayakumjorn, Francesca Toni · FAtt · 29 Aug 2019
Modeling infection methods of computer malware in the presence of vaccinations using epidemiological models: An analysis of real-world data · E. Yom-Tov, Nir Levy, Amir Rubin · 26 Aug 2019
The many Shapley values for model explanation · Mukund Sundararajan, A. Najmi · TDI, FAtt · 22 Aug 2019
Deep neural network or dermatologist? · Kyle Young, Gareth Booth, B. Simpson, R. Dutton, Sally Shrapnel · MedIm · 19 Aug 2019
LoRMIkA: Local rule-based model interpretability with k-optimal associations · Dilini Sewwandi Rajapaksha, Christoph Bergmeir, Wray Buntine · 11 Aug 2019
Reconstructing commuters network using machine learning and urban indicators · Gabriel Spadon, A. Carvalho, Jose F. Rodrigues-Jr, L. G. Alves · HAI · 09 Aug 2019
Neural Image Compression and Explanation · Xiang Li, Shihao Ji · 09 Aug 2019
Measurable Counterfactual Local Explanations for Any Classifier · Adam White, Artur Garcez · FAtt · 08 Aug 2019
Explaining Convolutional Neural Networks using Softmax Gradient Layer-wise Relevance Propagation · Brian Kenji Iwana, Ryohei Kuroki, S. Uchida · FAtt · 06 Aug 2019
Knowledge Consistency between Neural Networks and Beyond · Ruofan Liang, Tianlin Li, Longfei Li, Jingchao Wang, Quanshi Zhang · 05 Aug 2019
Semi-supervised Thai Sentence Segmentation Using Local and Distant Word Representations · Chanatip Saetia, Ekapol Chuangsuwanich, Tawunrat Chalothorn, P. Vateekul · 04 Aug 2019
Supervised and Unsupervised Neural Approaches to Text Readability · Matej Martinc, Senja Pollak, Marko Robnik-Šikonja · 26 Jul 2019
How to Manipulate CNNs to Make Them Lie: the GradCAM Case · T. Viering, Ziqi Wang, Marco Loog, E. Eisemann · AAML, FAtt · 25 Jul 2019
The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations · Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, X. Renard, Marcin Detyniecki · 22 Jul 2019
Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications · Shusen Liu, Di Wang, D. Maljovec, Rushil Anirudh, Jayaraman J. Thiagarajan, ..., Peter B. Robinson, H. Bhatia, Valerio Pascucci, B. Spears, P. Bremer · 19 Jul 2019
Why Does My Model Fail? Contrastive Local Explanations for Retail Forecasting · Ana Lucic, H. Haned, Maarten de Rijke · 17 Jul 2019
A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI · Erico Tjoa, Cuntai Guan · XAI · 17 Jul 2019
Technical Report: Partial Dependence through Stratification · T. Parr, James D. Wilson · 15 Jul 2019
A study on the Interpretability of Neural Retrieval Models using DeepSHAP · Zeon Trevor Fernando, Jaspreet Singh, Avishek Anand · FAtt, AAML · 15 Jul 2019
Forecasting remaining useful life: Interpretable deep learning approach via variational Bayesian inferences · Mathias Kraus, Stefan Feuerriegel · 11 Jul 2019
Explaining an increase in predicted risk for clinical alerts · Michaela Hardt, A. Rajkomar, Gerardo Flores, Andrew M. Dai, M. Howell, Greg S. Corrado, Claire Cui, Moritz Hardt · FAtt · 10 Jul 2019
Case-Based Reasoning for Assisting Domain Experts in Processing Fraud Alerts of Black-Box Machine Learning Models · Hilde J. P. Weerts, Werner van Ipenburg, Mykola Pechenizkiy · 07 Jul 2019
A Human-Grounded Evaluation of SHAP for Alert Processing · Hilde J. P. Weerts, Werner van Ipenburg, Mykola Pechenizkiy · FAtt · 07 Jul 2019
Global Aggregations of Local Explanations for Black Box models · I. V. D. Linden, H. Haned, Evangelos Kanoulas · FAtt · 05 Jul 2019
On Validating, Repairing and Refining Heuristic ML Explanations · Alexey Ignatiev, Nina Narodytska, Sasha Rubin · FAtt, LRM · 04 Jul 2019
Automating Distributed Tiered Storage Management in Cluster Computing · H. Herodotou, E. Kakoulli · 04 Jul 2019
Consistent Regression using Data-Dependent Coverings · Vincent Margot, Jean-Patrick Baudry, Frédéric Guilloux, Olivier Wintenberger · 04 Jul 2019
Interpretable Counterfactual Explanations Guided by Prototypes · A. V. Looveren, Janis Klaise · FAtt · 03 Jul 2019
Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics · Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo · 25 Jun 2019
Explaining Deep Learning Models with Constrained Adversarial Examples · J. Moore, Nils Y. Hammerla, C. Watkins · AAML, GAN · 25 Jun 2019
Improving performance of deep learning models with axiomatic attribution priors and expected gradients · G. Erion, Joseph D. Janizek, Pascal Sturmfels, Scott M. Lundberg, Su-In Lee · OOD, BDL, FAtt · 25 Jun 2019
DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems · Muhammad Rehman Zafar, N. Khan · FAtt · 24 Jun 2019
Generating Counterfactual and Contrastive Explanations using SHAP · Shubham Rathi · 21 Jun 2019
Incorporating Priors with Feature Attribution on Text Classification · Frederick Liu, Besim Avci · FAtt, FaML · 19 Jun 2019
Explanations can be manipulated and geometry is to blame · Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel · AAML, FAtt · 19 Jun 2019
VizADS-B: Analyzing Sequences of ADS-B Images Using Explainable Convolutional LSTM Encoder-Decoder to Detect Cyber Attacks · Sefi Akerman, Edan Habler, A. Shabtai · 19 Jun 2019
From Clustering to Cluster Explanations via Neural Networks · Jacob R. Kauffmann, Malte Esders, Lukas Ruff, G. Montavon, Wojciech Samek, K. Müller · 18 Jun 2019
ASAC: Active Sensing using Actor-Critic models · Jinsung Yoon, James Jordon, M. Schaar · CML · 16 Jun 2019
Understanding artificial intelligence ethics and safety · David Leslie · FaML, AI4TS · 11 Jun 2019
Issues with post-hoc counterfactual explanations: a discussion · Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki · CML · 11 Jun 2019
Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding · Haotian Ma, Hao Zhang, Fan Zhou, Yinqing Zhang, Quanshi Zhang · FAtt · 10 Jun 2019
Proposed Guidelines for the Responsible Use of Explainable Machine Learning · Patrick Hall, Navdeep Gill, N. Schmidt · SILM, XAI, FaML · 08 Jun 2019
ML-LOO: Detecting Adversarial Examples with Feature Attribution · Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan · AAML · 08 Jun 2019
Evaluating Explanation Methods for Deep Learning in Security · Alexander Warnecke, Dan Arp, Christian Wressnegger, Konrad Rieck · XAI, AAML, FAtt · 05 Jun 2019