Axiomatic Attribution for Deep Networks (arXiv:1703.01365)
4 March 2017
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt
Papers citing "Axiomatic Attribution for Deep Networks" (showing 50 of 2,871)

Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification
Agus Sudjianto, William Knauth, Rahul Singh, Zebin Yang, Aijun Zhang. FAtt. 08 Nov 2020

Feature Removal Is a Unifying Principle for Model Explanation Methods
Ian Covert, Scott M. Lundberg, Su-In Lee. FAtt. 06 Nov 2020

A survey on practical adversarial examples for malware classifiers
Daniel Park, B. Yener. AAML. 06 Nov 2020

Deep Transfer Learning for Automated Diagnosis of Skin Lesions from Photographs
Emma Rocheteau, Doyoon Kim. MedIm. 06 Nov 2020

Explainable AI meets Healthcare: A Study on Heart Disease Dataset
Devam Dave, Het Naik, Smiti Singhal, Pankesh Patel. 06 Nov 2020

Explain by Evidence: An Explainable Memory-based Neural Network for Question Answering
Quan Hung Tran, Nhan Dam, T. Lai, Franck Dernoncourt, Trung Le, Nham Le, Dinh Q. Phung. FAtt. 05 Nov 2020

Training Transformers for Information Security Tasks: A Case Study on Malicious URL Prediction
Ethan M. Rudd, Ahmed Abdallah. 05 Nov 2020

Learning and Evaluating Representations for Deep One-class Classification
Kihyuk Sohn, Chun-Liang Li, Jinsung Yoon, Minho Jin, Tomas Pfister. SSL. 04 Nov 2020

A BERT-based Dual Embedding Model for Chinese Idiom Prediction
Minghuan Tan, Jing Jiang. 04 Nov 2020

Influence Patterns for Explaining Information Flow in BERT
Kaiji Lu, Zifan Wang, Piotr (Peter) Mardziel, Anupam Datta. GNN. 02 Nov 2020

Machine versus Human Attention in Deep Reinforcement Learning Tasks
Sihang Guo, Ruohan Zhang, Bo Liu, Yifeng Zhu, M. Hayhoe, D. Ballard, Peter Stone. OffRL. 29 Oct 2020

Attribution Preservation in Network Compression for Reliable Network Interpretation
Geondo Park, J. Yang, Sung Ju Hwang, Eunho Yang. 28 Oct 2020

Technical Note: Game-Theoretic Interactions of Different Orders
Hao Zhang, Xu Cheng, Yiting Chen, Quanshi Zhang. 28 Oct 2020

Shapley Flow: A Graph-based Approach to Interpreting Model Predictions
Jiaxuan Wang, Jenna Wiens, Scott M. Lundberg. FAtt. 27 Oct 2020

Interpretation of NLP models through input marginalization
Siwon Kim, Jihun Yi, Eunji Kim, Sungroh Yoon. MILM, FAtt. 27 Oct 2020

Benchmarking Deep Learning Interpretability in Time Series Predictions
Aya Abdelsalam Ismail, Mohamed K. Gunady, H. C. Bravo, Soheil Feizi. XAI, AI4TS, FAtt. 26 Oct 2020

Bayesian Importance of Features (BIF)
Kamil Adamczewski, Frederik Harder, Mijung Park. FAtt. 26 Oct 2020

Investigating Saturation Effects in Integrated Gradients
Vivek Miglani, Narine Kokhlikyan, B. Alsallakh, Miguel Martin, Orion Reblitz-Richardson. FAtt. 23 Oct 2020

Towards falsifiable interpretability research
Matthew L. Leavitt, Ari S. Morcos. AAML, AI4CE. 22 Oct 2020

A Multilinear Sampling Algorithm to Estimate Shapley Values
Ramin Okhrati, Aldo Lipani. TDI, FAtt. 22 Oct 2020

Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation
Elena Voita, Rico Sennrich, Ivan Titov. 21 Oct 2020

TurnGPT: a Transformer-based Language Model for Predicting Turn-taking in Spoken Dialog
Erik Ekstedt, Gabriel Skantze. 21 Oct 2020

The Need for Standardized Explainability
Othman Benchekroun, Adel Rahimi, Qini Zhang, Tetiana Kodliuk. XAI. 20 Oct 2020

Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability
Jason Phang, Jungkyu Park, Krzysztof J. Geras. FAtt, AAML. 19 Oct 2020

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard. AAML. 19 Oct 2020

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
Christoph Molnar, Giuseppe Casalicchio, B. Bischl. AI4TS, AI4CE. 19 Oct 2020

Towards Interpreting BERT for Reading Comprehension Based QA
Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra. 18 Oct 2020

Evaluating Attribution Methods using White-Box LSTMs
Sophie Hao. FAtt, XAI. 16 Oct 2020

Marginal Contribution Feature Importance -- an Axiomatic Approach for The Natural Case
Amnon Catav, Boyang Fu, J. Ernst, S. Sankararaman, Ran Gilad-Bachrach. FAtt. 15 Oct 2020

FAR: A General Framework for Attributional Robustness
Adam Ivankay, Ivan Girardi, Chiara Marchiori, P. Frossard. OOD. 14 Oct 2020

Human-interpretable model explainability on high-dimensional data
Damien de Mijolla, Christopher Frye, M. Kunesch, J. Mansir, Ilya Feige. FAtt. 14 Oct 2020

Geometry matters: Exploring language examples at the decision boundary
Debajyoti Datta, Shashwat Kumar, Laura E. Barnes, Tom Fletcher. AAML. 14 Oct 2020

Learning Propagation Rules for Attribution Map Generation
Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang. FAtt. 14 Oct 2020

Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability
Yuxian Meng, Chun Fan, Zijun Sun, Eduard H. Hovy, Leilei Gan, Jiwei Li. FAtt. 14 Oct 2020

F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering
Hendrik Schuff, Heike Adel, Ngoc Thang Vu. ELM. 13 Oct 2020

Machine Learning for Material Characterization with an Application for Predicting Mechanical Properties
Anke Stoll, P. Benner. AI4CE. 12 Oct 2020

Embedded methods for feature selection in neural networks
K. VinayVarma. 12 Oct 2020

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Jasmijn Bastings, Katja Filippova. XAI, LRM. 12 Oct 2020

Gradient-based Analysis of NLP Models is Manipulable
Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh. AAML, FAtt. 12 Oct 2020

Contrastive Explanations for Reinforcement Learning via Embedded Self Predictions
Zhengxian Lin, Kim-Ho Lam, Alan Fern. SSL. 11 Oct 2020

Exathlon: A Benchmark for Explainable Anomaly Detection over Time Series
Vincent Jacob, Fei Song, Arnaud Stiegler, Bijan Rad, Y. Diao, Nesime Tatbul. AI4TS. 10 Oct 2020

Interpreting Multivariate Shapley Interactions in DNNs
Hao Zhang, Yichen Xie, Longjie Zheng, Die Zhang, Quanshi Zhang. TDI, FAtt. 10 Oct 2020

Evaluating and Characterizing Human Rationales
Samuel Carton, Anirudh Rathore, Chenhao Tan. 09 Oct 2020

A Unified Approach to Interpreting and Boosting Adversarial Transferability
Xin Eric Wang, Jie Ren, Shuyu Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang. AAML. 08 Oct 2020

IS-CAM: Integrated Score-CAM for axiomatic-based explanations
Rakshit Naidu, Ankita Ghosh, Yash Maurya, K. ShamanthRNayak, Soumya Snigdha Kundu. FAtt. 06 Oct 2020

Anomaly Detection Approach to Identify Early Cases in a Pandemic using Chest X-rays
Shehroz S. Khan, Faraz Khoshbakhtian, A. Ashraf. 06 Oct 2020

Visualizing Color-wise Saliency of Black-Box Image Classification Models
Yuhki Hatakeyama, Hiroki Sakuma, Yoshinori Konishi, Kohei Suenaga. FAtt. 06 Oct 2020

Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks
Róbert Csordás, Sjoerd van Steenkiste, Jürgen Schmidhuber. 05 Oct 2020

Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting
Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell. CLL. 04 Oct 2020

Explaining Deep Neural Networks
Oana-Maria Camburu. XAI, FAtt. 04 Oct 2020