Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
arXiv:1703.01365, 4 March 2017
Tags: OOD, FAtt
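The paper above introduces Integrated Gradients, an attribution method derived from axioms such as Sensitivity and Completeness. As a reading aid for the citation list below, here is a minimal sketch of that attribution rule, assuming a toy NumPy model with finite-difference gradients standing in for a real network with autodiff; the function names, baseline choice, and step count are illustrative assumptions, not code from the paper.

```python
import numpy as np

def toy_model(x):
    # Hypothetical stand-in for a network's scalar output F(x).
    return np.tanh(x).sum()

def numerical_grad(f, x, eps=1e-5):
    # Central finite differences; keeps the sketch dependency-free
    # (a real implementation would use framework autodiff instead).
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        g.flat[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def integrated_gradients(f, x, baseline=None, steps=50):
    # IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a(x - x')) da,
    # approximated with a midpoint Riemann sum over `steps` path points.
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.zeros_like(x)
    for a in alphas:
        avg_grad += numerical_grad(f, baseline + a * (x - baseline))
    avg_grad /= steps
    return (x - baseline) * avg_grad

x = np.array([0.3, -1.2, 2.0])
attr = integrated_gradients(toy_model, x)
# Completeness check: attributions should sum to roughly F(x) - F(baseline).
print(attr, attr.sum(), toy_model(x) - toy_model(np.zeros_like(x)))
```

In practice the path integral is approximated with framework gradients rather than finite differences, using on the order of 20 to 300 steps as the paper suggests; the final check mirrors the Completeness axiom, which requires the attributions to sum to the difference between F at the input and F at the baseline.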
Papers citing "Axiomatic Attribution for Deep Networks" (showing 50 of 2,871):
- Unsupervised Representation Learning of DNA Sequences. Vishal Agarwal, N. Reddy, Ashish Anand. Tags: BDL, SSL, DRL. (07 Jun 2019)
- XRAI: Better Attributions Through Regions. A. Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry. Tags: FAtt, XAI. (06 Jun 2019)
- Evaluating Explanation Methods for Deep Learning in Security. Alexander Warnecke, Dan Arp, Christian Wressnegger, Konrad Rieck. Tags: XAI, AAML, FAtt. (05 Jun 2019)
- c-Eval: A Unified Metric to Evaluate Feature-based Explanations via Perturbation. Minh Nhat Vu, Truc D. T. Nguyen, Nhathai Phan, Ralucca Gera, My T. Thai. Tags: AAML, FAtt. (05 Jun 2019)
- Interpretable and Differentially Private Predictions. Frederik Harder, Matthias Bauer, Mijung Park. Tags: FAtt. (05 Jun 2019)
- Adversarial Robustness as a Prior for Learned Representations. Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry. Tags: OOD, AAML. (03 Jun 2019)
- Explainability Techniques for Graph Convolutional Networks. Federico Baldassarre, Hossein Azizpour. Tags: GNN, FAtt. (31 May 2019)
- Certifiably Robust Interpretation in Deep Learning. Alexander Levine, Sahil Singla, Soheil Feizi. Tags: FAtt, AAML. (28 May 2019)
- A Cross-Domain Transferable Neural Coherence Model. Peng Xu, H. Saghir, Jin Sung Kang, Teng Long, A. Bose, Yanshuai Cao, Jackie C.K. Cheung. (28 May 2019)
- Deep Learning for Bug-Localization in Student Programs. Rahul Gupta, Aditya Kanade, S. Shevade. (28 May 2019)
- EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction. Diane Bouchacourt, Ludovic Denoyer. Tags: FAtt. (28 May 2019)
- Towards Interpretable Sparse Graph Representation Learning with Laplacian Pooling. Emmanuel Noutahi, Dominique Beaini, Julien Horwood, Sébastien Giguère, Prudencio Tossou. Tags: AI4CE. (28 May 2019)
- A Simple Saliency Method That Passes the Sanity Checks. Arushi Gupta, Sanjeev Arora. Tags: AAML, XAI, FAtt. (27 May 2019)
- Structure Learning for Neural Module Networks. Vardaan Pahuja, Jie Fu, Sarath Chandar, C. Pal. (27 May 2019)
- Analyzing the Interpretability Robustness of Self-Explaining Models. Haizhong Zheng, Earlence Fernandes, A. Prakash. Tags: AAML, LRM. (27 May 2019)
- Rearchitecting Classification Frameworks For Increased Robustness. Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, S. Jha, Xi Wu. Tags: AAML, OOD. (26 May 2019)
- Robust Attribution Regularization. Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, S. Jha. Tags: OOD. (23 May 2019)
- Interpreting a Recurrent Neural Network's Predictions of ICU Mortality Risk. L. Ho, M. Aczon, D. Ledbetter, R. Wetzel. (23 May 2019)
- Computationally Efficient Feature Significance and Importance for Machine Learning Models. Enguerrand Horel, K. Giesecke. Tags: FAtt. (23 May 2019)
- Interpreting Adversarially Trained Convolutional Neural Networks. Tianyuan Zhang, Zhanxing Zhu. Tags: AAML, GAN, FAtt. (23 May 2019)
- What Do Adversarially Robust Models Look At? Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima. (19 May 2019)
- How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins. Mark T. Keane, Eoin M. Kenny. (17 May 2019)
- Consensus-based Interpretable Deep Neural Networks with Application to Mortality Prediction. Shaeke Salman, S. N. Payrovnaziri, Xiuwen Liu, Pablo Rengifo-Moreno, Zhe He. (14 May 2019)
- What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use. S. Tonekaboni, Shalmali Joshi, M. Mccradden, Anna Goldenberg. (13 May 2019)
- Explainable AI for Trees: From Local Explanations to Global Understanding. Scott M. Lundberg, G. Erion, Hugh Chen, A. DeGrave, J. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, Su-In Lee. Tags: FAtt. (11 May 2019)
- On the Connection Between Adversarial Robustness and Saliency Map Interpretability. Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb. Tags: AAML, FAtt. (10 May 2019)
- Unsupervised Detection of Distinctive Regions on 3D Shapes. Xianzhi Li, Lequan Yu, Chi-Wing Fu, Daniel Cohen-Or, Pheng-Ann Heng. Tags: 3DPC. (05 May 2019)
- Temporal Graph Convolutional Networks for Automatic Seizure Detection. Ian Covert, B. Krishnan, I. Najm, Jiening Zhan, Matthew Shore, J. Hixson, M. Po. (03 May 2019)
- Visualizing Deep Networks by Optimizing with Integrated Gradients. Zhongang Qi, Saeed Khorram, Fuxin Li. Tags: FAtt. (02 May 2019)
- Full-Gradient Representation for Neural Network Visualization. Suraj Srinivas, François Fleuret. Tags: MILM, FAtt. (02 May 2019)
- "Why Should You Trust My Explanation?" Understanding Uncertainty in LIME Explanations. Hui Fen Tan, Kuangyan Song, Yiming Sun, Yujia Zhang, Madeilene Udell. Tags: FAtt. (29 Apr 2019)
- Property Inference for Deep Neural Networks. D. Gopinath, Hayes Converse, C. Păsăreanu, Ankur Taly. (29 Apr 2019)
- Evaluating Recurrent Neural Network Explanations. L. Arras, Ahmed Osman, K. Müller, Wojciech Samek. Tags: XAI, FAtt. (26 Apr 2019)
- Explaining a prediction in some nonlinear models. Cosimo Izzo. Tags: FAtt. (21 Apr 2019)
- Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation. Fabian Eitel, Emily Soehler, J. Bellmann-Strobl, A. Brandt, K. Ruprecht, ..., M. Weygandt, J. Haynes, M. Scheel, Friedemann Paul, K. Ritter. (18 Apr 2019)
- Generating Minimal Adversarial Perturbations with Integrated Adaptive Gradients. Yatie Xiao, Chi-Man Pun. Tags: AAML, GAN, TTA. (12 Apr 2019)
- Software and application patterns for explanation methods. Maximilian Alber. (09 Apr 2019)
- Diabetes Mellitus Forecasting Using Population Health Data in Ontario, Canada. Mathieu Ravaut, Hamed Sadeghi, Kin Kwan Leung, M. Volkovs, L. Rosella. Tags: OOD. (08 Apr 2019)
- Visualization of Convolutional Neural Networks for Monocular Depth Estimation. Junjie Hu, Yan Zhang, Takayuki Okatani. Tags: MDE. (06 Apr 2019)
- Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations. Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau. Tags: FAtt, 3DH, HAI. (04 Apr 2019)
- Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents. Christian Rupprecht, Cyril Ibrahim, C. Pal. (02 Apr 2019)
- Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks. Woo-Jeoung Nam, Shir Gur, Jaesik Choi, Lior Wolf, Seong-Whan Lee. Tags: FAtt. (01 Apr 2019)
- Interpreting Black Box Models via Hypothesis Testing. Collin Burns, Jesse Thomason, Wesley Tansey. Tags: FAtt. (29 Mar 2019)
- Bridging Adversarial Robustness and Gradient Interpretability. Beomsu Kim, Junghoon Seo, Taegyun Jeon. Tags: AAML. (27 Mar 2019)
- On Attribution of Recurrent Neural Network Predictions via Additive Decomposition. Mengnan Du, Ninghao Liu, Fan Yang, Shuiwang Ji, Helen Zhou. Tags: FAtt. (27 Mar 2019)
- Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation. Marco Ancona, Cengiz Öztireli, Markus Gross. Tags: FAtt, TDI. (26 Mar 2019)
- Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet. Wieland Brendel, Matthias Bethge. Tags: SSL, FAtt. (20 Mar 2019)
- NeuralHydrology -- Interpreting LSTMs in Hydrology. Frederik Kratzert, M. Herrnegger, D. Klotz, Sepp Hochreiter, Günter Klambauer. (19 Mar 2019)
- Attribution-driven Causal Analysis for Detection of Adversarial Examples. Susmit Jha, Sunny Raj, S. Fernandes, Sumit Kumar Jha, S. Jha, Gunjan Verma, B. Jalaeian, A. Swami. Tags: AAML. (14 Mar 2019)
- Activation Analysis of a Byte-Based Deep Neural Network for Malware Classification. Scott E. Coull, Christopher Gardner. (12 Mar 2019)