Axiomatic Attribution for Deep Networks

4 March 2017
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt
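For readers skimming this citation list, the cited paper introduces integrated gradients: for an input x and a baseline x', the attribution to feature i is IG_i(x) = (x_i - x'_i) * \int_0^1 \partial F(x' + a(x - x')) / \partial x_i \, da. The snippet below is a minimal sketch of that formula under stated assumptions: the toy logistic model, its analytic gradient, the zero baseline, and the 50-step midpoint quadrature are illustrative choices, not taken from this page or from any reference implementation.

```python
# Minimal sketch of integrated gradients (Sundararajan et al., 2017).
# The toy model and its analytic gradient are illustrative stand-ins.
import numpy as np

def model(x, w):
    """Toy scalar model: logistic score of a linear projection."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def model_grad(x, w):
    """Analytic gradient of the toy model with respect to the input x."""
    y = model(x, w)
    return y * (1.0 - y) * w

def integrated_gradients(x, baseline, w, steps=50):
    """Approximate IG_i = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a(x - x')) da
    with a midpoint Riemann sum over `steps` points along the straight-line path."""
    alphas = (np.arange(steps) + 0.5) / steps            # midpoints in (0, 1)
    diff = x - baseline
    grads = np.stack([model_grad(baseline + a * diff, w) for a in alphas])
    return diff * grads.mean(axis=0)                     # average gradient times (x - x')

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)
    x = rng.normal(size=4)
    baseline = np.zeros(4)
    attr = integrated_gradients(x, baseline, w)
    # Completeness axiom: attributions sum to F(x) - F(baseline), up to quadrature error.
    print(attr, attr.sum(), model(x, w) - model(baseline, w))
```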

Papers citing "Axiomatic Attribution for Deep Networks"

50 / 2,871 papers shown

Unsupervised Representation Learning of DNA Sequences
Vishal Agarwal, N. Reddy, Ashish Anand
BDL, SSL, DRL · 07 Jun 2019

XRAI: Better Attributions Through Regions
A. Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry
FAtt, XAI · 06 Jun 2019

Evaluating Explanation Methods for Deep Learning in Security
Alexander Warnecke, Dan Arp, Christian Wressnegger, Konrad Rieck
XAI, AAML, FAtt · 05 Jun 2019

c-Eval: A Unified Metric to Evaluate Feature-based Explanations via Perturbation
Minh Nhat Vu, Truc D. T. Nguyen, Nhathai Phan, Ralucca Gera, My T. Thai
AAML, FAtt · 05 Jun 2019

Interpretable and Differentially Private Predictions
Frederik Harder, Matthias Bauer, Mijung Park
FAtt · 05 Jun 2019

Adversarial Robustness as a Prior for Learned Representations
Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry
OOD, AAML · 03 Jun 2019

Explainability Techniques for Graph Convolutional Networks
Federico Baldassarre, Hossein Azizpour
GNN, FAtt · 31 May 2019

Certifiably Robust Interpretation in Deep Learning
Alexander Levine, Sahil Singla, Soheil Feizi
FAtt, AAML · 28 May 2019

A Cross-Domain Transferable Neural Coherence Model
Peng Xu, H. Saghir, Jin Sung Kang, Teng Long, A. Bose, Yanshuai Cao, Jackie C.K. Cheung
28 May 2019

Deep Learning for Bug-Localization in Student Programs
Rahul Gupta, Aditya Kanade, S. Shevade
28 May 2019

EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction
Diane Bouchacourt, Ludovic Denoyer
FAtt · 28 May 2019

Towards Interpretable Sparse Graph Representation Learning with Laplacian Pooling
Emmanuel Noutahi, Dominique Beaini, Julien Horwood, Sébastien Giguère, Prudencio Tossou
AI4CE · 28 May 2019

A Simple Saliency Method That Passes the Sanity Checks
Arushi Gupta, Sanjeev Arora
AAML, XAI, FAtt · 27 May 2019

Structure Learning for Neural Module Networks
Vardaan Pahuja, Jie Fu, Sarath Chandar, C. Pal
27 May 2019

Analyzing the Interpretability Robustness of Self-Explaining Models
Haizhong Zheng, Earlence Fernandes, A. Prakash
AAML, LRM · 27 May 2019

Rearchitecting Classification Frameworks For Increased Robustness
Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, S. Jha, Xi Wu
AAML, OOD · 26 May 2019

Robust Attribution Regularization
Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, S. Jha
OOD · 23 May 2019

Interpreting a Recurrent Neural Network's Predictions of ICU Mortality Risk
L. Ho, M. Aczon, D. Ledbetter, R. Wetzel
23 May 2019

Computationally Efficient Feature Significance and Importance for Machine Learning Models
Enguerrand Horel, K. Giesecke
FAtt · 23 May 2019

Interpreting Adversarially Trained Convolutional Neural Networks
Tianyuan Zhang, Zhanxing Zhu
AAML, GAN, FAtt · 23 May 2019

What Do Adversarially Robust Models Look At?
Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
19 May 2019

How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins
Mark T. Keane, Eoin M. Kenny
17 May 2019

Consensus-based Interpretable Deep Neural Networks with Application to Mortality Prediction
Shaeke Salman, S. N. Payrovnaziri, Xiuwen Liu, Pablo Rengifo-Moreno, Zhe He
14 May 2019

What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
S. Tonekaboni, Shalmali Joshi, M. Mccradden, Anna Goldenberg
13 May 2019

Explainable AI for Trees: From Local Explanations to Global Understanding
Scott M. Lundberg, G. Erion, Hugh Chen, A. DeGrave, J. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, Su-In Lee
FAtt · 11 May 2019

On the Connection Between Adversarial Robustness and Saliency Map Interpretability
Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb
AAML, FAtt · 10 May 2019

Unsupervised Detection of Distinctive Regions on 3D Shapes
Xianzhi Li, Lequan Yu, Chi-Wing Fu, Daniel Cohen-Or, Pheng-Ann Heng
3DPC · 05 May 2019

Temporal Graph Convolutional Networks for Automatic Seizure Detection
Ian Covert, B. Krishnan, I. Najm, Jiening Zhan, Matthew Shore, J. Hixson, M. Po
03 May 2019

Visualizing Deep Networks by Optimizing with Integrated Gradients
Zhongang Qi, Saeed Khorram, Fuxin Li
FAtt · 02 May 2019

Full-Gradient Representation for Neural Network Visualization
Suraj Srinivas, François Fleuret
MILM, FAtt · 02 May 2019

"Why Should You Trust My Explanation?" Understanding Uncertainty in LIME Explanations
Hui Fen Tan, Kuangyan Song, Yiming Sun, Yujia Zhang, Madeilene Udell
FAtt · 29 Apr 2019

Property Inference for Deep Neural Networks
D. Gopinath, Hayes Converse, C. Păsăreanu, Ankur Taly
29 Apr 2019

Evaluating Recurrent Neural Network Explanations
L. Arras, Ahmed Osman, K. Müller, Wojciech Samek
XAI, FAtt · 26 Apr 2019

Explaining a prediction in some nonlinear models
Cosimo Izzo
FAtt · 21 Apr 2019

Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation
Fabian Eitel, Emily Soehler, J. Bellmann-Strobl, A. Brandt, K. Ruprecht, ..., M. Weygandt, J. Haynes, M. Scheel, Friedemann Paul, K. Ritter
18 Apr 2019

Generating Minimal Adversarial Perturbations with Integrated Adaptive Gradients
Yatie Xiao, Chi-Man Pun
AAML, GAN, TTA · 12 Apr 2019

Software and application patterns for explanation methods
Maximilian Alber
09 Apr 2019

Diabetes Mellitus Forecasting Using Population Health Data in Ontario, Canada
Mathieu Ravaut, Hamed Sadeghi, Kin Kwan Leung, M. Volkovs, L. Rosella
OOD · 08 Apr 2019

Visualization of Convolutional Neural Networks for Monocular Depth Estimation
Junjie Hu, Yan Zhang, Takayuki Okatani
MDE · 06 Apr 2019

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau
FAtt, 3DH, HAI · 04 Apr 2019

Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents
Christian Rupprecht, Cyril Ibrahim, C. Pal
02 Apr 2019

Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks
Woo-Jeoung Nam, Shir Gur, Jaesik Choi, Lior Wolf, Seong-Whan Lee
FAtt · 01 Apr 2019

Interpreting Black Box Models via Hypothesis Testing
Collin Burns, Jesse Thomason, Wesley Tansey
FAtt · 29 Mar 2019

Bridging Adversarial Robustness and Gradient Interpretability
Beomsu Kim, Junghoon Seo, Taegyun Jeon
AAML · 27 Mar 2019

On Attribution of Recurrent Neural Network Predictions via Additive Decomposition
Mengnan Du, Ninghao Liu, Fan Yang, Shuiwang Ji, Helen Zhou
FAtt · 27 Mar 2019

Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation
Marco Ancona, Cengiz Öztireli, Markus Gross
FAtt, TDI · 26 Mar 2019

Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
Wieland Brendel, Matthias Bethge
SSL, FAtt · 20 Mar 2019

NeuralHydrology -- Interpreting LSTMs in Hydrology
Frederik Kratzert, M. Herrnegger, D. Klotz, Sepp Hochreiter, Günter Klambauer
19 Mar 2019

Attribution-driven Causal Analysis for Detection of Adversarial Examples
Susmit Jha, Sunny Raj, S. Fernandes, Sumit Kumar Jha, S. Jha, Gunjan Verma, B. Jalaeian, A. Swami
AAML · 14 Mar 2019

Activation Analysis of a Byte-Based Deep Neural Network for Malware Classification
Scott E. Coull, Christopher Gardner
12 Mar 2019