ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Axiomatic Attribution for Deep Networks

4 March 2017
Mukund Sundararajan
Ankur Taly
Qiqi Yan
    OOD FAtt
ArXiv (abs) · PDF · HTML

Papers citing "Axiomatic Attribution for Deep Networks"

Showing 50 of 2,871 citing papers
Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
Sushant Agarwal
S. Jabbari
Chirag Agarwal
Sohini Upadhyay
Zhiwei Steven Wu
Himabindu Lakkaraju
FAtt AAML
93
64
0
21 Feb 2021
Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays
Joseph Paul Cohen
Rupert Brooks
Sovann En
Evan Zucker
Anuj Pareek
M. Lungren
Akshay S. Chaudhari
FAtt MedIm
85
4
0
18 Feb 2021
Unified Shapley Framework to Explain Prediction Drift
Aalok Shanbhag
A. Ghosh
Josh Rubin
FAtt FedML AI4TS
72
3
0
15 Feb 2021
Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep Convolutional Networks via Integrated Gradient-Based Scoring
S. Sattarzadeh
M. Sudhakar
Konstantinos N. Plataniotis
Jongseong Jang
Yeonjeong Jeong
Hyunwoo J. Kim
FAtt
59
39
0
15 Feb 2021
Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks
M. Sudhakar
S. Sattarzadeh
Konstantinos N. Plataniotis
Jongseong Jang
Yeonjeong Jeong
Hyunwoo J. Kim
AAML
65
11
0
15 Feb 2021
Attribution Mask: Filtering Out Irrelevant Features By Recursively Focusing Attention on Inputs of DNNs
Jaehwan Lee
Joon‐Hyuk Chang
TDI FAtt
98
0
0
15 Feb 2021
MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset
Chuizheng Meng
Loc Trinh
Nan Xu
Yan Liu
67
30
0
12 Feb 2021
SCOUT: Socially-COnsistent and UndersTandable Graph Attention Network for Trajectory Prediction of Vehicles and VRUs
Sandra Carrasco
David Fernández Llorca
Miguel Ángel Sotelo
56
53
0
12 Feb 2021
What does LIME really see in images?
Damien Garreau
Dina Mardaoui
FAtt
64
40
0
11 Feb 2021
Inductive Granger Causal Modeling for Multivariate Time Series
Yunfei Chu
Xiaowei Wang
Jianxin Ma
Kunyang Jia
Jingren Zhou
Hongxia Yang
CML AI4TS
63
11
0
10 Feb 2021
WheaCha: A Method for Explaining the Predictions of Models of Code
Yu Wang
Ke Wang
Linzhang Wang
FAtt
52
3
0
09 Feb 2021
RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization
Austin P. Wright
Omar Shaikh
Haekyu Park
Will Epperson
Muhammed Ahmed
Stephane Pinel
Duen Horng Chau
Diyi Yang
44
22
0
08 Feb 2021
Achieving Explainability for Plant Disease Classification with Disentangled Variational Autoencoders
Harshana Habaragamuwa
Y. Oishi
Kenichi Tanaka
106
9
0
05 Feb 2021
HYDRA: Hypergradient Data Relevance Analysis for Interpreting Deep Neural Networks
Yuanyuan Chen
Boyang Albert Li
Han Yu
Pengcheng Wu
Chunyan Miao
TDI
106
42
0
04 Feb 2021
IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural Networks
Yixiang Wang
Jiqiang Liu
Xiaolin Chang
J. Misic
Vojislav B. Mišić
AAML
69
12
0
03 Feb 2021
Learning domain-agnostic visual representation for computational pathology using medically-irrelevant style transfer augmentation
R. Yamashita
J. Long
Snikitha Banda
Jeanne Shen
D. Rubin
OOD MedIm
70
51
0
02 Feb 2021
Counterfactual Generation with Knockoffs
Oana-Iuliana Popescu
M. Shadaydeh
Joachim Denzler
42
6
0
01 Feb 2021
Hierarchical Variational Autoencoder for Visual Counterfactuals
Nicolas Vercheval
A. Pižurica
CML DRL BDL
87
2
0
01 Feb 2021
Counterfactual State Explanations for Reinforcement Learning Agents via Generative Deep Learning
Matthew Lyle Olson
Roli Khanna
Lawrence Neal
Fuxin Li
Weng-Keen Wong
CML
92
75
0
29 Jan 2021
Explaining Natural Language Processing Classifiers with Occlusion and Language Modeling
David Harbecke
AAML
53
2
0
28 Jan 2021
Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency Map Comparison
Lukas Brunke
Prateek Agrawal
Nikhil George
AAML FAtt
59
13
0
26 Jan 2021
Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations
Danding Wang
Wencan Zhang
Brian Y. Lim
FAtt
56
22
0
23 Jan 2021
i-Algebra: Towards Interactive Interpretability of Deep Neural Networks
Xinyang Zhang
Ren Pang
S. Ji
Fenglong Ma
Ting Wang
HAI AI4CE
38
5
0
22 Jan 2021
Benchmarking Perturbation-based Saliency Maps for Explaining Atari Agents
Tobias Huber
Benedikt Limmer
Elisabeth André
FAtt
39
14
0
18 Jan 2021
Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation
Fan Yang
Ninghao Liu
Mengnan Du
X. Hu
OOD
53
17
0
18 Jan 2021
Generating Attribution Maps with Disentangled Masked Backpropagation
Adria Ruiz
Antonio Agudo
Francesc Moreno
FAtt
40
1
0
17 Jan 2021
Robusta: Robust AutoML for Feature Selection via Reinforcement Learning
Xiaoyang Sean Wang
Yue Liu
Yibo Jacky Zhang
B. Kailkhura
Klara Nahrstedt
26
3
0
15 Jan 2021
Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki
H. Ben-younes
P. Pérez
Matthieu Cord
XAI
186
178
0
13 Jan 2021
Convolutional Neural Nets in Chemical Engineering: Foundations, Computations, and Applications
Shengli Jiang
Victor M. Zavala
AI4CE
36
28
0
13 Jan 2021
Explaining the Black-box Smoothly- A Counterfactual Approach
Junyu Chen
Yong Du
Yufan He
W. Paul Segars
Ye Li
MedIm FAtt
152
105
0
11 Jan 2021
SyReNN: A Tool for Analyzing Deep Neural Networks
Matthew Sotoudeh
Aditya V. Thakur
AAML GNN
63
16
0
09 Jan 2021
Progressive Interpretation Synthesis: Interpreting Task Solving by Quantifying Previously Used and Unused Information
Zhengqi He
Taro Toyoizumi
54
1
0
08 Jan 2021
Who's a Good Boy? Reinforcing Canine Behavior in Real-Time using Machine Learning
Jason Stock
Tom Cavey
20
2
0
07 Jan 2021
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Mohamed Bennai
Mahum Naseer
T. Theocharides
C. Kyrkou
O. Mutlu
Lois Orosa
Jungwook Choi
OOD
139
101
0
04 Jan 2021
On Baselines for Local Feature Attributions
Johannes Haug
Stefan Zurn
Peter El-Jiz
Gjergji Kasneci
FAtt
62
31
0
04 Jan 2021
iGOS++: Integrated Gradient Optimized Saliency by Bilateral Perturbations
Saeed Khorram
T. Lawson
Fuxin Li
AAML FAtt
59
26
0
31 Dec 2020
FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
Han Guo
Nazneen Rajani
Peter Hase
Joey Tianyi Zhou
Caiming Xiong
TDI
135
116
0
31 Dec 2020
Quantitative Evaluations on Saliency Methods: An Experimental Study
Xiao-hui Li
Yuhan Shi
Haoyang Li
Wei Bai
Yuanwei Song
Caleb Chen Cao
Lei Chen
FAtt XAI
108
20
0
31 Dec 2020
SkiNet: A Deep Learning Solution for Skin Lesion Diagnosis with Uncertainty Estimation and Explainability
R. Singh
R. Gorantla
Sai Giridhar Allada
N. Pratap
67
3
0
30 Dec 2020
Enhanced Regularizers for Attributional Robustness
A. Sarkar
Anirban Sarkar
V. Balasubramanian
65
16
0
28 Dec 2020
A Survey on Neural Network Interpretability
Yu Zhang
Peter Tiño
A. Leonardis
K. Tang
FaML XAI
209
691
0
28 Dec 2020
Explaining NLP Models via Minimal Contrastive Editing (MiCE)
Alexis Ross
Ana Marasović
Matthew E. Peters
77
122
0
27 Dec 2020
My Teacher Thinks The World Is Flat! Interpreting Automatic Essay Scoring Mechanism
Swapnil Parekh
Yaman Kumar Singla
Changyou Chen
Junyi Jessy Li
R. Shah
76
11
0
27 Dec 2020
Inserting Information Bottlenecks for Attribution in Transformers
Zhiying Jiang
Raphael Tang
Ji Xin
Jimmy J. Lin
55
6
0
27 Dec 2020
To what extent do human explanations of model behavior align with actual model behavior?
Grusha Prasad
Yixin Nie
Joey Tianyi Zhou
Robin Jia
Douwe Kiela
Adina Williams
73
28
0
24 Dec 2020
QUACKIE: A NLP Classification Task With Ground Truth Explanations
Yves Rychener
X. Renard
Djamé Seddah
P. Frossard
Marcin Detyniecki
34
3
0
24 Dec 2020
Algorithmic Recourse in the Wild: Understanding the Impact of Data and Model Shifts
Kaivalya Rawal
Ece Kamar
Himabindu Lakkaraju
96
42
0
22 Dec 2020
Towards Robust Explanations for Deep Neural Networks
Ann-Kathrin Dombrowski
Christopher J. Anders
K. Müller
Pan Kessel
FAtt
96
64
0
18 Dec 2020
Transformer Interpretability Beyond Attention Visualization
Hila Chefer
Shir Gur
Lior Wolf
145
681
0
17 Dec 2020
Weakly-Supervised Action Localization and Action Recognition using Global-Local Attention of 3D CNN
N. Yudistira
M. Kavitha
Takio Kurita
3DPC
66
13
0
17 Dec 2020