arXiv:1703.01365 (v2, latest)
Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
4 March 2017
Tags: OOD, FAtt
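The paper above introduces Integrated Gradients, which attributes a model's prediction to its input features by accumulating gradients along the straight-line path from a baseline to the input. The sketch below is a minimal NumPy illustration, not the authors' code: the function name, the toy quadratic model, and the step count are all assumptions made for the example. It uses a midpoint Riemann sum to approximate the path integral and checks the paper's completeness axiom (attributions sum to the difference in model outputs).

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=64):
    """Riemann-sum (midpoint rule) approximation of integrated gradients
    along the straight-line path from `baseline` to `x`."""
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.zeros_like(x, dtype=float)
    for a in alphas:
        avg_grad += grad_f(baseline + a * (x - baseline))
    avg_grad /= steps
    return (x - baseline) * avg_grad

# Toy differentiable model: f(x) = sum(x^2), with analytic gradient 2x.
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x

x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attributions = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attributions)        # [1. 4. 9.]
print(attributions.sum())  # 14.0
print(f(x) - f(baseline))  # 14.0
```

For this quadratic model the midpoint rule is exact, so each attribution equals x_i squared; for real networks one would replace `grad_f` with the framework's automatic gradient and tune `steps`.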
Papers citing "Axiomatic Attribution for Deep Networks" (50 of 2,871 shown):
- Uncertainty Propagation in Deep Neural Network Using Active Subspace. Weiqi Ji, Zhuyin Ren, C. Law. [UQCV] 10 Mar 2019.
- GNNExplainer: Generating Explanations for Graph Neural Networks. Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, J. Leskovec. [LLMAG] 10 Mar 2019.
- Interpreting and Understanding Graph Convolutional Neural Network using Gradient-based Attribution Method. Shangsheng Xie, Mingming Lu. [FAtt, GNN] 09 Mar 2019.
- Interpretable Deep Learning in Drug Discovery. Kristina Preuer, Günter Klambauer, F. Rippmann, Sepp Hochreiter, Thomas Unterthiner. 07 Mar 2019.
- Adversarial Examples on Graph Data: Deep Insights into Attack and Defense. Huijun Wu, Chen Wang, Y. Tyshetskiy, Andrew Docherty, Kai Lu, Liming Zhu. [AAML, GNN] 05 Mar 2019.
- Aggregating explanation methods for stable and robust explainability. Laura Rieger, Lars Kai Hansen. [AAML, FAtt] 01 Mar 2019.
- Deep learning in bioinformatics: introduction, application, and perspective in big data era. Yu Li, Chao Huang, Lizhong Ding, Zhongxiao Li, Yijie Pan, Xin Gao. [AI4CE] 28 Feb 2019.
- Attention is not Explanation. Sarthak Jain, Byron C. Wallace. [FAtt] 26 Feb 2019.
- Explaining a black-box using Deep Variational Information Bottleneck Approach. Seo-Jin Bang, P. Xie, Heewook Lee, Wei Wu, Eric Xing. [XAI, FAtt] 19 Feb 2019.
- Regularizing Black-box Models for Improved Interpretability. Gregory Plumb, Maruan Al-Shedivat, Ángel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar. [AAML] 18 Feb 2019.
- Significance Tests for Neural Networks. Enguerrand Horel, K. Giesecke. 16 Feb 2019.
- Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps. Beomsu Kim, Junghoon Seo, Seunghyun Jeon, Jamyoung Koo, J. Choe, Taegyun Jeon. [FAtt] 13 Feb 2019.
- LS-Tree: Model Interpretation When the Data Are Linguistic. Jianbo Chen, Michael I. Jordan. 11 Feb 2019.
- Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded. Ramprasaath R. Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, Devi Parikh. [FAtt, VLM] 11 Feb 2019.
- Global Explanations of Neural Networks: Mapping the Landscape of Predictions. Mark Ibrahim, Melissa Louie, C. Modarres, John Paisley. [FAtt] 06 Feb 2019.
- Neural Network Attributions: A Causal Perspective. Aditya Chattopadhyay, Piyushi Manupriya, Anirban Sarkar, V. Balasubramanian. [CML] 06 Feb 2019.
- Fooling Neural Network Interpretations via Adversarial Model Manipulation. Juyeon Heo, Sunghwan Joo, Taesup Moon. [AAML, FAtt] 06 Feb 2019.
- Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation. Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi. [FAtt] 01 Feb 2019.
- Interpreting Deep Neural Networks Through Variable Importance. J. Ish-Horowicz, Dana Udwin, Seth Flaxman, Sarah Filippi, Lorin Crawford. [FAtt] 28 Jan 2019.
- Testing Conditional Independence in Supervised Learning Algorithms. David S. Watson, Marvin N. Wright. [CML] 28 Jan 2019.
- On the (In)fidelity and Sensitivity for Explanations. Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar. [FAtt] 27 Jan 2019.
- Unsupervised speech representation learning using WaveNet autoencoders. J. Chorowski, Ron J. Weiss, Samy Bengio, Aaron van den Oord. [SSL] 25 Jan 2019.
- Learning Global Pairwise Interactions with Bayesian Neural Networks. Tianyu Cui, Pekka Marttinen, Samuel Kaski. [BDL] 24 Jan 2019.
- On Network Science and Mutual Information for Explaining Deep Neural Networks. Brian Davis, Umang Bhatt, Kartikeya Bhardwaj, R. Marculescu, J. M. F. Moura. [FedML, SSL, FAtt] 20 Jan 2019.
- Towards Aggregating Weighted Feature Attributions. Umang Bhatt, Pradeep Ravikumar, José M. F. Moura. [FAtt, TDI] 20 Jan 2019.
- Interpretable machine learning: definitions, methods, and applications. W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin Yu. [XAI, HAI] 14 Jan 2019.
- Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries. Christian Scano, Battista Biggio, Giovanni Lagorio, Fabio Roli, A. Armando. [AAML] 11 Jan 2019.
- Explaining Aggregates for Exploratory Analytics. Fotis Savva, Christos Anagnostopoulos, Peter Triantafillou. 29 Dec 2018.
- Feature-Wise Bias Amplification. Klas Leino, Emily Black, Matt Fredrikson, S. Sen, Anupam Datta. [FaML] 21 Dec 2018.
- Analysis Methods in Neural Language Processing: A Survey. Yonatan Belinkov, James R. Glass. 21 Dec 2018.
- Interactive Naming for Explaining Deep Neural Networks: A Formative Study. M. Hamidi-Haines, Zhongang Qi, Alan Fern, Fuxin Li, Prasad Tadepalli. [FAtt, HAI] 18 Dec 2018.
- A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability. Xiaowei Huang, Daniel Kroening, Wenjie Ruan, Marta Kwiatkowska, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi. [AAML] 18 Dec 2018.
- Can I trust you more? Model-Agnostic Hierarchical Explanations. Michael Tsang, Youbang Sun, Dongxu Ren, Yan Liu. [FAtt] 12 Dec 2018.
- An Empirical Study towards Understanding How Deep Convolutional Nets Recognize Falls. Yan Zhang, Heiko Neumann. 05 Dec 2018.
- Interpretable Deep Learning under Fire. Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang. [AAML, AI4CE] 03 Dec 2018.
- Analyzing Federated Learning through an Adversarial Lens. A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo. [FedML] 29 Nov 2018.
- Using Attribution to Decode Dataset Bias in Neural Network Models for Chemistry. Kevin McCloskey, Ankur Taly, Federico Monti, M. Brenner, Lucy J. Colwell. 27 Nov 2018.
- GAN Dissection: Visualizing and Understanding Generative Adversarial Networks. David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, J. Tenenbaum, William T. Freeman, Antonio Torralba. [GAN] 26 Nov 2018.
- Representer Point Selection for Explaining Deep Neural Networks. Chih-Kuan Yeh, Joon Sik Kim, Ian En-Hsu Yen, Pradeep Ravikumar. [TDI] 23 Nov 2018.
- On a Sparse Shortcut Topology of Artificial Neural Networks. Fenglei Fan, Dayang Wang, Hengtao Guo, Qikui Zhu, Pingkun Yan, Ge Wang, Hengyong Yu. 22 Nov 2018.
- Compensated Integrated Gradients to Reliably Interpret EEG Classification. Kazuki Tachikawa, Yuji Kawai, Jihoon Park, Minoru Asada. [FAtt] 21 Nov 2018.
- Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions. Denis A. Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, Yasunori Ishii, Sotaro Tsukizawa. [FAtt] 19 Nov 2018.
- Towards Explainable Deep Learning for Credit Lending: A Case Study. C. Modarres, Mark Ibrahim, Melissa Louie, John Paisley. [FaML] 15 Nov 2018.
- Deep Q learning for fooling neural networks. Mandar M. Kulkarni. 13 Nov 2018.
- What evidence does deep learning model use to classify Skin Lesions? Xiaoxiao Li, Junyan Wu, Eric Z. Chen, Hongda Jiang. 02 Nov 2018.
- Technical Note on Transcription Factor Motif Discovery from Importance Scores (TF-MoDISco) version 0.5.6.5. Avanti Shrikumar, Katherine Tian, Žiga Avsec, A. Shcherbina, Abhimanyu Banerjee, Mahfuza Sharmin, Surag Nair, A. Kundaje. 31 Oct 2018.
- What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play. Shi Feng, Jordan L. Boyd-Graber. [HAI] 23 Oct 2018.
- Explaining Machine Learning Models using Entropic Variable Projection. François Bachoc, Fabrice Gamboa, Max Halford, Jean-Michel Loubes, Laurent Risser. [FAtt] 18 Oct 2018.
- Concise Explanations of Neural Networks using Adversarial Training. P. Chalasani, Jiefeng Chen, Aravind Sadagopan, S. Jha, Xi Wu. [AAML, FAtt] 15 Oct 2018.
- What made you do this? Understanding black-box decisions with sufficient input subsets. Brandon Carter, Jonas W. Mueller, Siddhartha Jain, David K. Gifford. [FAtt] 09 Oct 2018.