A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee · 22 May 2017 · arXiv:1705.07874 · FAtt
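This paper introduces SHAP (SHapley Additive exPlanations), which unifies additive feature-attribution methods under Shapley values and proposes the model-agnostic Kernel SHAP estimator. For orientation, here is a minimal sketch of Kernel SHAP via the authors' `shap` Python package; the dataset, model, background-sample size, and number of explained rows are illustrative assumptions, not taken from this page.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# Illustrative model and data; Kernel SHAP is model-agnostic, so any
# callable that maps inputs to outputs works here.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

# Kernel SHAP approximates each feature's Shapley value by perturbing
# inputs against a background sample; attributions sum to the difference
# between the prediction and the expected model output.
background = X.iloc[:50]  # small background sample keeps this tractable
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X.iloc[:5])  # explain a few rows; slow for many
```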
Papers citing "A Unified Approach to Interpreting Model Predictions" (showing 50 of 3,916)
| Title | Authors | Tags | Metrics | Date |
|---|---|---|---|---|
| Exploring Interpretable LSTM Neural Networks over Multi-Variable Data | Tian Guo, Tao R. Lin, Nino Antulov-Fantulin | AI4TS | 94 / 156 / 0 | 28 May 2019 |
| Infusing domain knowledge in AI-based "black box" models for better explainability with application in bankruptcy prediction | Sheikh Rabiul Islam, W. Eberle, Sid Bundy, S. Ghafoor | MLAU | 69 / 23 / 0 | 27 May 2019 |
| Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments | Vasilis Syrgkanis, Victor Lei, Miruna Oprescu, Maggie Hei, Keith Battocchi, Greg Lewis | CML | 50 / 73 / 0 | 24 May 2019 |
| Computationally Efficient Feature Significance and Importance for Machine Learning Models | Enguerrand Horel, K. Giesecke | FAtt | 55 / 9 / 0 | 23 May 2019 |
| Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations | R. Mothilal, Amit Sharma, Chenhao Tan | CML | 140 / 1,033 / 0 | 19 May 2019 |
| Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees | Summer Devlin, Chandan Singh, W. James Murdoch, Bin Yu | FAtt | 62 / 14 / 0 | 18 May 2019 |
| From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices | Jessica Morley, Luciano Floridi, Libby Kinsey, Anat Elhalal | — | 83 / 57 / 0 | 15 May 2019 |
| Learning Policies from Self-Play with Policy Gradients and MCTS Value Estimates | Dennis J. N. J. Soemers, Éric Piette, Matthew Stephenson, C. Browne | — | 62 / 8 / 0 | 14 May 2019 |
| Modelling urban networks using Variational Autoencoders | Kira Kempinska, R. Murcio | GNN | 42 / 39 / 0 | 14 May 2019 |
| Explainable AI for Trees: From Local Explanations to Global Understanding | Scott M. Lundberg, G. Erion, Hugh Chen, A. DeGrave, J. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, Su-In Lee | FAtt | 113 / 291 / 0 | 11 May 2019 |
| Interpret Federated Learning with Shapley Values | Guan Wang | FAtt, FedML | 71 / 92 / 0 | 11 May 2019 |
| Machine learning-guided synthesis of advanced inorganic materials | Bijun Tang, Yuhao Lu, Jiadong Zhou, Han Wang, Prafful Golani, Manzhang Xu, Quan Xu, Cuntai Guan, Zheng Liu | AI4CE | 31 / 99 / 0 | 10 May 2019 |
| Visualizing Deep Networks by Optimizing with Integrated Gradients | Zhongang Qi, Saeed Khorram, Fuxin Li | FAtt | 83 / 126 / 0 | 02 May 2019 |
| Full-Gradient Representation for Neural Network Visualization | Suraj Srinivas, François Fleuret | MILM, FAtt | 120 / 278 / 0 | 02 May 2019 |
| Unrestricted Permutation forces Extrapolation: Variable Importance Requires at least One More Model, or There Is No Free Variable Importance | Giles Hooker, L. Mentch, Siyu Zhou | — | 93 / 159 / 0 | 01 May 2019 |
| To believe or not to believe: Validating explanation fidelity for dynamic malware analysis | Li-Wei Chen, Carter Yagemann, Evan Downing | AAML, FAtt | 56 / 3 / 0 | 30 Apr 2019 |
| Evaluating Recurrent Neural Network Explanations | L. Arras, Ahmed Osman, K. Müller, Wojciech Samek | XAI, FAtt | 117 / 88 / 0 | 26 Apr 2019 |
| Explaining a prediction in some nonlinear models | Cosimo Izzo | FAtt | 41 / 0 / 0 | 21 Apr 2019 |
| Explainability in Human-Agent Systems | A. Rosenfeld, A. Richardson | XAI | 83 / 207 / 0 | 17 Apr 2019 |
| Enhancing Time Series Momentum Strategies Using Deep Neural Networks | Bryan Lim, S. Zohren, Stephen J. Roberts | AIFin, AI4TS | 72 / 90 / 0 | 09 Apr 2019 |
| Software and application patterns for explanation methods | Maximilian Alber | — | 80 / 11 / 0 | 09 Apr 2019 |
| Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations | Christian A. Scholbeck, Christoph Molnar, C. Heumann, B. Bischl, Giuseppe Casalicchio | — | 108 / 27 / 0 | 08 Apr 2019 |
| Data Shapley: Equitable Valuation of Data for Machine Learning | Amirata Ghorbani, James Zou | TDI, FedML | 142 / 796 / 0 | 05 Apr 2019 |
| Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks | Woo-Jeoung Nam, Shir Gur, Jaesik Choi, Lior Wolf, Seong-Whan Lee | FAtt | 83 / 99 / 0 | 01 Apr 2019 |
| Interpreting Black Box Models via Hypothesis Testing | Collin Burns, Jesse Thomason, Wesley Tansey | FAtt | 80 / 9 / 0 | 29 Mar 2019 |
| Do Not Trust Additive Explanations | Alicja Gosiewska, P. Biecek | — | 73 / 42 / 0 | 27 Mar 2019 |
| Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation | Marco Ancona, Cengiz Öztireli, Markus Gross | FAtt, TDI | 121 / 230 / 0 | 26 Mar 2019 |
| Explaining individual predictions when features are dependent: More accurate approximations to Shapley values | K. Aas, Martin Jullum, Anders Løland | FAtt, TDI | 90 / 635 / 0 | 25 Mar 2019 |
| Activation Analysis of a Byte-Based Deep Neural Network for Malware Classification | Scott E. Coull, Christopher Gardner | — | 64 / 51 / 0 | 12 Mar 2019 |
| GNNExplainer: Generating Explanations for Graph Neural Networks | Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, J. Leskovec | LLMAG | 163 / 1,339 / 0 | 10 Mar 2019 |
| A Grid-based Method for Removing Overlaps of Dimensionality Reduction Scatterplot Layouts | Gladys M. H. Hilasaca, Wilson E. Marcílio-Jr, D. M. Eler, Rafael M. Martins, F. Paulovich | — | 54 / 9 / 0 | 08 Mar 2019 |
| Explaining Anomalies Detected by Autoencoders Using SHAP | Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, Lior Rokach | FAtt, TDI | 77 / 86 / 0 | 06 Mar 2019 |
| Towards Efficient Data Valuation Based on the Shapley Value | R. Jia, David Dao, Wei Ping, F. Hubis, Nicholas Hynes, Nezihe Merve Gürel, Yue Liu, Ce Zhang, Basel Alomair, C. Spanos | TDI | 118 / 426 / 0 | 27 Feb 2019 |
| Forecasting intracranial hypertension using multi-scale waveform metrics | Matthias Huser, A. Kündig, W. Karlen, V. D. Luca, Martin Jaggi | — | 13 / 18 / 0 | 25 Feb 2019 |
| Explaining a black-box using Deep Variational Information Bottleneck Approach | Seo-Jin Bang, P. Xie, Heewook Lee, Wei Wu, Eric Xing | XAI, FAtt | 77 / 77 / 0 | 19 Feb 2019 |
| Regularizing Black-box Models for Improved Interpretability | Gregory Plumb, Maruan Al-Shedivat, Ángel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar | AAML | 125 / 80 / 0 | 18 Feb 2019 |
| STRIP: A Defence Against Trojan Attacks on Deep Neural Networks | Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal | AAML | 98 / 821 / 0 | 18 Feb 2019 |
| LS-Tree: Model Interpretation When the Data Are Linguistic | Jianbo Chen, Michael I. Jordan | — | 64 / 18 / 0 | 11 Feb 2019 |
| Global Explanations of Neural Networks: Mapping the Landscape of Predictions | Mark Ibrahim, Melissa Louie, C. Modarres, John Paisley | FAtt | 97 / 119 / 0 | 06 Feb 2019 |
| Fooling Neural Network Interpretations via Adversarial Model Manipulation | Juyeon Heo, Sunghwan Joo, Taesup Moon | AAML, FAtt | 129 / 206 / 0 | 06 Feb 2019 |
| Fairwashing: the risk of rationalization | Ulrich Aïvodji, Hiromi Arai, O. Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp | FaML | 70 / 147 / 0 | 28 Jan 2019 |
| Testing Conditional Independence in Supervised Learning Algorithms | David S. Watson, Marvin N. Wright | CML | 98 / 53 / 0 | 28 Jan 2019 |
| On the (In)fidelity and Sensitivity for Explanations | Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar | FAtt | 110 / 456 / 0 | 27 Jan 2019 |
| ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU | William Caicedo-Torres, Jairo Gutiérrez | — | 41 / 82 / 0 | 24 Jan 2019 |
| On Network Science and Mutual Information for Explaining Deep Neural Networks | Brian Davis, Umang Bhatt, Kartikeya Bhardwaj, R. Marculescu, J. M. F. Moura | FedML, SSL, FAtt | 55 / 10 / 0 | 20 Jan 2019 |
| Quantifying Interpretability and Trust in Machine Learning Systems | Philipp Schmidt, F. Biessmann | — | 56 / 115 / 0 | 20 Jan 2019 |
| Towards Aggregating Weighted Feature Attributions | Umang Bhatt, Pradeep Ravikumar, José M. F. Moura | FAtt, TDI | 34 / 13 / 0 | 20 Jan 2019 |
| Interpretable machine learning: definitions, methods, and applications | W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin Yu | XAI, HAI | 211 / 1,459 / 0 | 14 Jan 2019 |
| Interpretable CNNs for Object Classification | Quanshi Zhang, Xin Eric Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu | — | 61 / 54 / 0 | 08 Jan 2019 |
| Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks | Zenan Ling, Haotian Ma, Yu Yang, Robert C. Qiu, Song-Chun Zhu, Quanshi Zhang | MILM | 36 / 3 / 0 | 08 Jan 2019 |