Axiomatic Attribution for Deep Networks
arXiv: 1703.01365 (v2) · 4 March 2017
Mukund Sundararajan, Ankur Taly, Qiqi Yan
Topics: OOD, FAtt
Papers citing "Axiomatic Attribution for Deep Networks" (50 of 2,871 papers shown)
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
  Arun Das, P. Rad [XAI] · 188 · 608 · 0 · 16 Jun 2020
ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping
  Cher Bass, Mariana da Silva, Carole Sudre, Petru-Daniel Tudosiu, Stephen M. Smith, E. C. Robinson [FAtt] · 58 · 40 · 0 · 15 Jun 2020
Loss Rate Forecasting Framework Based on Macroeconomic Changes: Application to US Credit Card Industry
  Sajjad Taghiyeh, D. Lengacher, R. Handfield · 11 · 11 · 0 · 14 Jun 2020
On Saliency Maps and Adversarial Robustness
  Puneet Mangla, Vedant Singh, V. Balasubramanian [AAML] · 47 · 17 · 0 · 14 Jun 2020
OrigamiNet: Weakly-Supervised, Segmentation-Free, One-Step, Full Page Text Recognition by learning to unfold
  Mohamed Yousef, Tom E. Bishop [AI4TS] · 94 · 82 · 0 · 12 Jun 2020
Explaining Local, Global, And Higher-Order Interactions In Deep Learning
  Samuel Lerman, Chenliang Xu, C. Venuto, Henry A. Kautz [FAtt] · 125 · 10 · 0 · 12 Jun 2020
SegNBDT: Visual Decision Rules for Segmentation
  Alvin Wan, Daniel Ho, You Song, Henk Tillman, Sarah Adel Bargal, Joseph E. Gonzalez [SSeg] · 108 · 6 · 0 · 11 Jun 2020
Getting a CLUE: A Method for Explaining Uncertainty Estimates
  Javier Antorán, Umang Bhatt, T. Adel, Adrian Weller, José Miguel Hernández-Lobato [UQCV, BDL] · 110 · 117 · 0 · 11 Jun 2020
Exploring Weaknesses of VQA Models through Attribution Driven Insights
  Shaunak Halbe · 42 · 2 · 0 · 11 Jun 2020
3D Point Cloud Feature Explanations Using Gradient-Based Methods
  A. Gupta, Simon Watson, Hujun Yin [3DPC] · 46 · 28 · 0 · 09 Jun 2020
Adversarial Infidelity Learning for Model Interpretation
  Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei Wang [AAML] · 103 · 18 · 0 · 09 Jun 2020
I know why you like this movie: Interpretable Efficient Multimodal Recommender
  Barbara Rychalska, Dominika Basaj, Jacek Dąbrowski, Michał Daniluk · 21 · 3 · 0 · 09 Jun 2020
A Baseline for Shapley Values in MLPs: from Missingness to Neutrality
  Cosimo Izzo, Aldo Lipani, Ramin Okhrati, F. Medda [FAtt] · 68 · 18 · 0 · 08 Jun 2020
The Penalty Imposed by Ablated Data Augmentation
  Frederick Liu, A. Najmi, Mukund Sundararajan · 56 · 6 · 0 · 08 Jun 2020
Nonparametric Feature Impact and Importance
  T. Parr, James D. Wilson, J. Hamrick [FAtt] · 133 · 31 · 0 · 08 Jun 2020
Re-understanding Finite-State Representations of Recurrent Policy Networks
  Mohamad H. Danesh, Anurag Koul, Alan Fern, Saeed Khorram · 69 · 21 · 0 · 06 Jun 2020
Higher-Order Explanations of Graph Neural Networks via Relevant Walks
  Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, G. Montavon · 113 · 224 · 0 · 05 Jun 2020
Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images
  S. Chatterjee, Fatima Saad, Chompunuch Sarasaen, Suhita Ghosh, Valerie Krug, ..., Petia Radeva, G. Rose, Sebastian Stober, Oliver Speck, A. Nürnberger · 104 · 26 · 0 · 03 Jun 2020
Shapley Value as Principled Metric for Structured Network Pruning
  Marco Ancona, Cengiz Öztireli, Markus Gross · 67 · 9 · 0 · 02 Jun 2020
Evaluations and Methods for Explanation through Robustness Analysis
  Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Ravikumar, Seungyeon Kim, Sanjiv Kumar, Cho-Jui Hsieh [XAI] · 70 · 58 · 0 · 31 May 2020
RelEx: A Model-Agnostic Relational Model Explainer
  Yue Zhang, David DeFazio, Arti Ramesh · 57 · 110 · 0 · 30 May 2020
Explainable Artificial Intelligence: a Systematic Review
  Giulia Vilone, Luca Longo [XAI] · 110 · 271 · 0 · 29 May 2020
Assessing the validity of saliency maps for abnormality localization in medical imaging
  N. Arun, N. Gaw, Praveer Singh, Ken Chang, K. Hoebel, J. Patel, M. Gidwani, Jayashree Kalpathy-Cramer · 65 · 22 · 0 · 29 May 2020
Explainable deep learning models in medical image analysis
  Amitojdeep Singh, S. Sengupta, Vasudevan Lakshminarayanan [XAI] · 102 · 502 · 0 · 28 May 2020
Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport
  Kyle Swanson, L. Yu, Tao Lei [OT] · 67 · 37 · 0 · 27 May 2020
Domain Specific, Semi-Supervised Transfer Learning for Medical Imaging
  Jitender Singh Virk, Deepti R. Bathula · 26 · 3 · 0 · 24 May 2020
Interpretable and Accurate Fine-grained Recognition via Region Grouping
  Zixuan Huang, Yin Li · 85 · 141 · 0 · 21 May 2020
An Adversarial Approach for Explaining the Predictions of Deep Neural Networks
  Arash Rahnama, A.-Yu Tseng [FAtt, AAML, FaML] · 46 · 5 · 0 · 20 May 2020
Deep learning approaches for neural decoding: from CNNs to LSTMs and spikes to fMRI
  J. Livezey, Joshua I. Glaser [AI4CE] · 100 · 9 · 0 · 19 May 2020
Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
  Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir [FAtt] · 77 · 67 · 0 · 18 May 2020
Reliable Local Explanations for Machine Listening
  Saumitra Mishra, Emmanouil Benetos, Bob L. T. Sturm, S. Dixon [AAML, FAtt] · 46 · 21 · 0 · 15 May 2020
Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions
  Xiaochuang Han, Byron C. Wallace, Yulia Tsvetkov [MILM, FAtt, AAML, TDI] · 95 · 174 · 0 · 14 May 2020
DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses
  Yaxin Li, Wei Jin, Han Xu, Jiliang Tang [AAML] · 90 · 133 · 0 · 13 May 2020
Towards Frequency-Based Explanation for Robust CNN
  Zifan Wang, Yilin Yang, Ankit Shrivastava, Varun Rawal, Zihao Ding [AAML, FAtt] · 57 · 49 · 0 · 06 May 2020
Interpretable Learning-to-Rank with Generalized Additive Models
  Honglei Zhuang, Xuanhui Wang, Michael Bendersky, Alexander Grushetsky, Yonghui Wu, Petr Mitrichev, Ethan Sterling, Nathan Bell, Walker Ravina, Hai Qian [AI4CE, FAtt] · 86 · 12 · 0 · 06 May 2020
Contextualizing Hate Speech Classifiers with Post-hoc Explanation
  Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren · 135 · 142 · 0 · 05 May 2020
On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors
  Adriano Lucieri, Muhammad Naseer Bajwa, S. Braun, M. I. Malik, Andreas Dengel, Sheraz Ahmed [MedIm] · 244 · 65 · 0 · 05 May 2020
Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
  Peter Hase, Joey Tianyi Zhou [FAtt] · 85 · 305 · 0 · 04 May 2020
Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?
  Marco Melis, Michele Scalas, Ambra Demontis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli [AAML, FAtt] · 56 · 28 · 0 · 04 May 2020
Explaining AI-based Decision Support Systems using Concept Localization Maps
  Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel, Sheraz Ahmed · 72 · 27 · 0 · 04 May 2020
Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM Language Models
  Kaiji Lu, Piotr (Peter) Mardziel, Klas Leino, Matt Fredrikson, Anupam Datta · 81 · 10 · 0 · 03 May 2020
Evaluating and Aggregating Feature-based Model Explanations
  Umang Bhatt, Adrian Weller, J. M. F. Moura [XAI] · 112 · 227 · 0 · 01 May 2020
Towards Visually Explaining Video Understanding Networks with Perturbation
  Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato [FAtt] · 38 · 3 · 0 · 01 May 2020
Hide-and-Seek: A Template for Explainable AI
  Thanos Tagaris, A. Stafylopatis · 26 · 6 · 0 · 30 Apr 2020
Attribution Analysis of Grammatical Dependencies in LSTMs
  Sophie Hao · 39 · 3 · 0 · 30 Apr 2020
How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking
  Nicola De Cao, Michael Schlichtkrull, Wilker Aziz, Ivan Titov · 76 · 92 · 0 · 30 Apr 2020
Modelling Suspense in Short Stories as Uncertainty Reduction over Neural Representation
  David Wilmot, Frank Keller · 67 · 22 · 0 · 30 Apr 2020
Robustness Certification of Generative Models
  M. Mirman, Timon Gehr, Martin Vechev [AAML] · 70 · 21 · 0 · 30 Apr 2020
WT5?! Training Text-to-Text Models to Explain their Predictions
  Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, Karishma Malkan · 82 · 201 · 0 · 30 Apr 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
  Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran [AAML, XAI] · 120 · 382 · 0 · 30 Apr 2020