ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Axiomatic Attribution for Deep Networks (arXiv:1703.01365, v2 latest)
4 March 2017
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt
ArXiv (abs) · PDF · HTML
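The paper above introduces Integrated Gradients, which attributes a model's prediction to its input features by integrating the model's gradients along a straight-line path from a baseline to the input, and satisfies the completeness axiom (attributions sum to the difference between the prediction at the input and at the baseline). A minimal NumPy sketch, using a toy analytic model whose gradient is known in closed form — the function, its gradient, and the step count are illustrative choices, not from the paper's experiments:

```python
import numpy as np

def model(x):
    # toy differentiable model: f(x) = x0^2 + 3*x1
    return x[0] ** 2 + 3 * x[1]

def grad_model(x):
    # analytic gradient of the toy model: (2*x0, 3)
    return np.array([2 * x[0], 3.0])

def integrated_gradients(x, baseline, steps=100):
    # Riemann-sum (midpoint rule) approximation of the path integral
    # IG_i = (x_i - baseline_i) * ∫_0^1 ∂f/∂x_i(baseline + α(x - baseline)) dα
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([grad_model(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(x, baseline)
# completeness: attr sums to f(x) - f(baseline) (here 1.0 + 6.0 = 7.0)
print(attr, attr.sum(), model(x) - model(baseline))
```

In practice the hand-written gradient is replaced by automatic differentiation (e.g. the `IntegratedGradients` implementation in the Captum library, listed among the citing papers below).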
Papers citing "Axiomatic Attribution for Deep Networks"

50 of 2,871 citing papers shown
Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation
S. Sattarzadeh, M. Sudhakar, Anthony Lem, Shervin Mehryar, K. N. Plataniotis, Jongseong Jang, Hyunwoo J. Kim, Yeonjeong Jeong, Sang-Min Lee, Kyunghoon Bae
FAtt, XAI
55 · 33 · 0
01 Oct 2020

Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
Hanjie Chen, Yangfeng Ji
AAML, VLM
113 · 66 · 0
01 Oct 2020

Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
Michael Schlichtkrull, Nicola De Cao, Ivan Titov
AI4CE
150 · 221 · 0
01 Oct 2020

When will the mist clear? On the Interpretability of Machine Learning for Medical Applications: a survey
A. Banegas-Luna, Jorge Pena-García, Adrian Iftene, F. Guadagni, P. Ferroni, Noemi Scarpato, Fabio Massimo Zanzotto, A. Bueno-Crespo, Horacio Pérez-Sánchez
OOD
41 · 1 · 0
01 Oct 2020

Explainable Deep Reinforcement Learning for UAV Autonomous Navigation
Lei He, Nabil Aouf, Bifeng Song
74 · 11 · 0
30 Sep 2020
Accurate and Robust Feature Importance Estimation under Distribution Shifts
Jayaraman J. Thiagarajan, V. Narayanaswamy, Rushil Anirudh, P. Bremer, A. Spanias
OOD
63 · 9 · 0
30 Sep 2020

Trustworthy Convolutional Neural Networks: A Gradient Penalized-based Approach
Nicholas F Halliwell, Freddy Lecue
FAtt
118 · 9 · 0
29 Sep 2020

Improving Interpretability for Computer-aided Diagnosis tools on Whole Slide Imaging with Multiple Instance Learning and Gradient-based Explanations
Antoine Pirovano, H. Heuberger, Sylvain Berlemont, Saïd Ladjal, Isabelle Bloch
88 · 12 · 0
29 Sep 2020

Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability
Ishai Rosenberg, Shai Meir, J. Berrebi, I. Gordon, Guillaume Sicard, Eli David
AAML, SILM
31 · 28 · 0
28 Sep 2020

Quantitative and Qualitative Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis
Amitojdeep Singh, J. Balaji, M. Rasheed, Varadharajan Jayakumar, R. Raman, Vasudevan Lakshminarayanan
BDL, XAI, FAtt
57 · 29 · 0
26 Sep 2020
A Diagnostic Study of Explainability Techniques for Text Classification
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein
XAI, FAtt
104 · 226 · 0
25 Sep 2020

A Unifying Review of Deep and Shallow Anomaly Detection
Lukas Ruff, Jacob R. Kauffmann, Robert A. Vandermeulen, G. Montavon, Wojciech Samek, Marius Kloft, Thomas G. Dietterich, Klaus-Robert Muller
UQCV
150 · 806 · 0
24 Sep 2020

Interpreting and Boosting Dropout from a Game-Theoretic View
Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, Quanshi Zhang
FAtt, AI4CE
92 · 48 · 0
24 Sep 2020

Information-Theoretic Visual Explanation for Black-Box Classifiers
Jihun Yi, Eunji Kim, Siwon Kim, Sungroh Yoon
FAtt
88 · 6 · 0
23 Sep 2020

The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom
FAtt
82 · 23 · 0
23 Sep 2020
Introspective Learning by Distilling Knowledge from Online Self-explanation
Jindong Gu, Zhiliang Wu, Volker Tresp
39 · 3 · 0
19 Sep 2020

Principles and Practice of Explainable Machine Learning
Vaishak Belle, I. Papantonis
FaML
86 · 454 · 0
18 Sep 2020

Reconstructing Actions To Explain Deep Reinforcement Learning
Xuan Chen, Zifan Wang, Yucai Fan, Bonan Jin, Piotr (Peter) Mardziel, Carlee Joe-Wong, Anupam Datta
FAtt
50 · 2 · 0
17 Sep 2020

Evaluating and Mitigating Bias in Image Classifiers: A Causal Perspective Using Counterfactuals
Saloni Dash, V. Balasubramanian, Amit Sharma
CML
76 · 70 · 0
17 Sep 2020

Captum: A unified and generic model interpretability library for PyTorch
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
FAtt
207 · 857 · 0
16 Sep 2020
Are Interpretations Fairly Evaluated? A Definition Driven Pipeline for Post-Hoc Interpretability
Ninghao Liu, Yunsong Meng, Helen Zhou, Tie Wang, Bo Long
XAI, FAtt
79 · 7 · 0
16 Sep 2020

Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses
Kaivalya Rawal, Himabindu Lakkaraju
108 · 11 · 0
15 Sep 2020

On Robustness and Bias Analysis of BERT-based Relation Extraction
Luoqiu Li, Xiang Chen, Hongbin Ye, Zhen Bi, Shumin Deng, Ningyu Zhang, Huajun Chen
81 · 18 · 0
14 Sep 2020

MeLIME: Meaningful Local Explanation for Machine Learning Models
T. Botari, Frederik Hvilshoj, Rafael Izbicki, A. Carvalho
AAML, FAtt
75 · 16 · 0
12 Sep 2020

Towards a More Reliable Interpretation of Machine Learning Outputs for Safety-Critical Systems using Feature Importance Fusion
D. Rengasamy, Benjamin Rothwell, Grazziela Figueredo
FaML, FAtt
40 · 35 · 0
11 Sep 2020
CounteRGAN: Generating Realistic Counterfactuals with Residual Generative Adversarial Nets
Daniel Nemirovsky, Nicolas Thiebaut, Ye Xu, Abhishek Gupta
CML, GAN
78 · 35 · 0
11 Sep 2020

Understanding the Role of Individual Units in a Deep Neural Network
David Bau, Jun-Yan Zhu, Hendrik Strobelt, Àgata Lapedriza, Bolei Zhou, Antonio Torralba
GAN
84 · 457 · 0
10 Sep 2020

Learning Shape Features and Abstractions in 3D Convolutional Neural Networks for Detecting Alzheimer's Disease
M. Sagar, M. Dyrba
MedIm
15 · 0 · 0
10 Sep 2020

XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification
Kevin Fauvel, Tao R. Lin, Véronique Masson, Elisa Fromont, Alexandre Termier
BDL, AI4TS
39 · 102 · 0
10 Sep 2020

How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks
Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre
XAI, FAtt
75 · 31 · 0
07 Sep 2020
Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
Erico Tjoa, Cuntai Guan
XAI, FAtt
103 · 27 · 0
07 Sep 2020

Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring
Nijat Mehdiyev, Peter Fettke
AI4TS
67 · 55 · 0
04 Sep 2020

Decontextualized learning for interpretable hierarchical representations of visual patterns
R. I. Etheredge, M. Schartl, Alex Jordan
50 · 4 · 0
31 Aug 2020

Real-time Prediction of COVID-19 related Mortality using Electronic Health Records
Patrick Schwab, Arash Mehrjou, S. Parbhoo, Leo Anthony Celi, J. Hetzel, M. Hofer, Bernhard Schölkopf, Stefan Bauer
51 · 51 · 0
31 Aug 2020

SHAP values for Explaining CNN-based Text Classification Models
Wei Zhao, Tarun Joshi, V. Nair, Agus Sudjianto
FAtt
47 · 37 · 0
26 Aug 2020
Estimating Example Difficulty Using Variance of Gradients
Chirag Agarwal, Daniel D'souza, Sara Hooker
306 · 111 · 0
26 Aug 2020

DRR4Covid: Learning Automated COVID-19 Infection Segmentation from Digitally Reconstructed Radiographs
Pengyi Zhang, Yunxin Zhong, Yulin Deng, Xiaoying Tang, Xiaoqiong Li
77 · 8 · 0
26 Aug 2020

Leveraging Organizational Resources to Adapt Models to New Data Modalities
S. Suri, Raghuveer Chanda, Neslihan Bulut, P. Narayana, Yemao Zeng, Peter Bailis, Sugato Basu, G. Narlikar, Christopher Ré, Abishek Sethi
VLM, OffRL
57 · 11 · 0
23 Aug 2020

Emergent symbolic language based deep medical image classification
Aritra Chowdhury, Alberto Santamaria-Pang, James R. Kubricht, Peter Tu
MedIm
50 · 11 · 0
22 Aug 2020

A Unified Taylor Framework for Revisiting Attribution Methods
Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guo-Can Feng, Helen Zhou
FAtt, TDI
146 · 21 · 0
21 Aug 2020
iCaps: An Interpretable Classifier via Disentangled Capsule Networks
Dahuin Jung, Jonghyun Lee, Jihun Yi, Sungroh Yoon
136 · 12 · 0
20 Aug 2020

Explainability in Deep Reinforcement Learning
Alexandre Heuillet, Fabien Couthouis, Natalia Díaz Rodríguez
XAI
249 · 284 · 0
15 Aug 2020

Survey of XAI in digital pathology
Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström
75 · 56 · 0
14 Aug 2020

ANDES at SemEval-2020 Task 12: A jointly-trained BERT multilingual model for offensive language detection
Juan Manuel Pérez, Aymé Arango, Franco Luque
VLM
31 · 4 · 0
13 Aug 2020

The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models
Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, ..., Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, Ann Yuan
VLM
130 · 196 · 0
12 Aug 2020
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
FAtt
140 · 163 · 0
11 Aug 2020

Predicting Risk of Developing Diabetic Retinopathy using Deep Learning
Ashish Bora, Siva Balasubramanian, Boris Babenko, S. Virmani, Subhashini Venugopalan, ..., D. Webster, A. Varadarajan, N. Hammel, Yun-Hui Liu, Pinal Bavishi
33 · 146 · 0
10 Aug 2020

On Commonsense Cues in BERT for Solving Commonsense Tasks
Leyang Cui, Sijie Cheng, Yu Wu, Yue Zhang
SSL, CML, LRM
57 · 15 · 0
10 Aug 2020

Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging
N. Arun, N. Gaw, P. Singh, Ken Chang, M. Aggarwal, ..., J. Patel, M. Gidwani, Julius Adebayo, M. D. Li, Jayashree Kalpathy-Cramer
FAtt
105 · 110 · 0
06 Aug 2020
Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs
Ruigang Fu, Qingyong Hu, Xiaohu Dong, Yulan Guo, Yinghui Gao, Biao Li
FAtt
97 · 272 · 0
05 Aug 2020