ResearchTrend.AI
arXiv: 1703.01365
Axiomatic Attribution for Deep Networks (v2, latest)
4 March 2017
Mukund Sundararajan, Ankur Taly, Qiqi Yan
Tags: OOD, FAtt

Papers citing "Axiomatic Attribution for Deep Networks"

50 / 2,871 papers shown
  • Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction
    Luoqiu Li, Xiang Chen, Zhen Bi, Xin Xie, Shumin Deng, Ningyu Zhang, Chuanqi Tan, Mosha Chen, Huajun Chen · Tags: AAML · 01 Apr 2021
  • NetAdaptV2: Efficient Neural Architecture Search with Fast Super-Network Training and Architecture Optimization
    Tien-Ju Yang, Yi-Lun Liao, Vivienne Sze · 31 Mar 2021
  • Modeling Users and Online Communities for Abuse Detection: A Position on Ethics and Explainability
    Pushkar Mishra, H. Yannakoudakis, Ekaterina Shutova · 31 Mar 2021
  • Trusted Artificial Intelligence: Towards Certification of Machine Learning Applications
    P. M. Winter, Sebastian K. Eder, J. Weissenbock, Christoph Schwald, Thomas Doms, Tom Vogt, Sepp Hochreiter, Bernhard Nessler · 31 Mar 2021
  • Neural Response Interpretation through the Lens of Critical Pathways
    Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, Seong Tae Kim, Nassir Navab · 31 Mar 2021
  • MISA: Online Defense of Trojaned Models using Misattributions
    Panagiota Kiourti, Wenchao Li, Anirban Roy, Karan Sikka, Susmit Jha · 29 Mar 2021
  • Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers
    Hila Chefer, Shir Gur, Lior Wolf · Tags: ViT · 29 Mar 2021
  • Automated freezing of gait assessment with marker-based motion capture and multi-stage spatial-temporal graph convolutional neural networks
    Benjamin Filtjens, Pieter Ginis, A. Nieuwboer, P. Slaets, Bart Vanrumste · 29 Mar 2021
  • Efficient Explanations from Empirical Explainers
    Robert Schwarzenberg, Nils Feldhus, Sebastian Möller · Tags: FAtt · 29 Mar 2021
  • Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation
    Dohun Lim, Hyeonseok Lee, Sungchan Kim · Tags: FAtt, AAML · 26 Mar 2021
  • Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation
    Yi Sun, Abel N. Valente, Sijia Liu, Dakuo Wang · Tags: AAML · 25 Mar 2021
  • Group-CAM: Group Score-Weighted Visual Explanations for Deep Convolutional Networks
    Qing-Long Zhang, Lu Rao, Yubin Yang · 25 Mar 2021
  • ECINN: Efficient Counterfactuals from Invertible Neural Networks
    Frederik Hvilshoj, Alexandros Iosifidis, Ira Assent · Tags: BDL · 25 Mar 2021
  • Symmetry-Preserving Paths in Integrated Gradients
    Miguel A. Lerma, Mirtha Lucas · 25 Mar 2021
  • SelfExplain: A Self-Explaining Architecture for Neural Text Classifiers
    Dheeraj Rajagopal, Vidhisha Balachandran, Eduard H. Hovy, Yulia Tsvetkov · Tags: MILM, SSL, FAtt, AI4TS · 23 Mar 2021
  • Weakly Supervised Recovery of Semantic Attributes
    Ameen Ali, Tomer Galanti, Evgeniy Zheltonozhskiy, Chaim Baskin, Lior Wolf · 22 Mar 2021
  • Interpreting Deep Learning Models with Marginal Attribution by Conditioning on Quantiles
    M. Merz, Ronald Richman, A. Tsanakas, M. Wüthrich · Tags: FAtt · 22 Mar 2021
  • ExAD: An Ensemble Approach for Explanation-based Adversarial Detection
    R. Vardhan, Ninghao Liu, Phakpoom Chinprutthiwong, Weijie Fu, Zhen Hu, Helen Zhou, G. Gu · Tags: AAML · 22 Mar 2021
  • Robust Models Are More Interpretable Because Attributions Look Normal
    Zifan Wang, Matt Fredrikson, Anupam Datta · Tags: OOD, FAtt · 20 Mar 2021
  • Local Interpretations for Explainable Natural Language Processing: A Survey
    Siwen Luo, Hamish Ivison, S. Han, Josiah Poon · Tags: MILM · 20 Mar 2021
  • Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
    Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou · Tags: AAML, FaML, XAI, HAI · 19 Mar 2021
  • Noise Modulation: Let Your Model Interpret Itself
    Haoyang Li, Xinggang Wang · Tags: FAtt, AAML · 19 Mar 2021
  • Refining Language Models with Compositional Explanations
    Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, Xiang Ren · 18 Mar 2021
  • Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
    Antonios Mamalakis, I. Ebert‐Uphoff, E. Barnes · Tags: OOD · 18 Mar 2021
  • Linear Iterative Feature Embedding: An Ensemble Framework for Interpretable Model
    Agus Sudjianto, Jinwen Qiu, Miaoqi Li, Jie Chen · 18 Mar 2021
  • EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
    Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang · Tags: AAML · 16 Mar 2021
  • DiaRet: A browser-based application for the grading of Diabetic Retinopathy with Integrated Gradients
    Shaswat Patel, Maithili Lohakare, Samyak Prajapati, Shaanya Singh, Nancy Patel · Tags: MedIm · 15 Mar 2021
  • CACTUS: Detecting and Resolving Conflicts in Objective Functions
    Subhajit Das, Alex Endert · 13 Mar 2021
  • A Unified Game-Theoretic Interpretation of Adversarial Robustness
    Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, ..., Xu Cheng, Xin Eric Wang, Meng Zhou, Jie Shi, Quanshi Zhang · Tags: AAML · 12 Mar 2021
  • Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU Models
    Mengnan Du, Varun Manjunatha, R. Jain, Ruchi Deshpande, Franck Dernoncourt, Jiuxiang Gu, Tong Sun, Helen Zhou · 11 Mar 2021
  • Interpretable Machine Learning: Moving From Mythos to Diagnostics
    Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar · 10 Mar 2021
  • Deepfake Videos in the Wild: Analysis and Detection
    Jiameng Pu, Neal Mangaokar, Lauren Kelly, P. Bhattacharya, Kavya Sundaram, M. Javed, Bolun Wang, Bimal Viswanath · 07 Mar 2021
  • Explanations for Occluded Images
    Hana Chockler, Daniel Kroening, Youcheng Sun · 05 Mar 2021
  • Human-Understandable Decision Making for Visual Recognition
    Xiaowei Zhou, Jie Yin, Ivor Tsang, Chen Wang · Tags: FAtt, HAI · 05 Mar 2021
  • Learning to Predict with Supporting Evidence: Applications to Clinical Risk Prediction
    Aniruddh Raghu, John Guttag, K. Young, E. Pomerantsev, Adrian Dalca, Collin M. Stultz · 04 Mar 2021
  • GLAMOUR: Graph Learning over Macromolecule Representations
    Somesh Mohapatra, Joyce An, Rafael Gómez-Bombarelli · Tags: AI4CE · 03 Mar 2021
  • ICAM-reg: Interpretable Classification and Regression with Feature Attribution for Mapping Neurological Phenotypes in Individual Scans
    Cher Bass, Mariana da Silva, Carole Sudre, Logan Z. J. Williams, Petru-Daniel Tudosiu, F. Alfaro-Almagro, S. Fitzgibbon, M. Glasser, Stephen M. Smith, E. C. Robinson · 03 Mar 2021
  • An Interpretable Multiple-Instance Approach for the Detection of referable Diabetic Retinopathy from Fundus Images
    Alexandros Papadopoulos, F. Topouzis, A. Delopoulos · 02 Mar 2021
  • Interpretable Artificial Intelligence through the Lens of Feature Interaction
    Michael Tsang, James Enouen, Yan Liu · Tags: FAtt · 01 Mar 2021
  • Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence
    Atoosa Kasirzadeh · 01 Mar 2021
  • Axiomatic Explanations for Visual Search, Retrieval, and Similarity Learning
    Mark Hamilton, Scott M. Lundberg, Lei Zhang, Stephanie Fu, William T. Freeman · Tags: FAtt · 28 Feb 2021
  • CXR-Net: An Artificial Intelligence Pipeline for Quick Covid-19 Screening of Chest X-Rays
    H. Abdulah, B. Huber, Sinan Lal, H. Abdallah, L. Palese, H. Soltanian-Zadeh, D. Gatti · 26 Feb 2021
  • PredDiff: Explanations and Interactions from Conditional Expectations
    Stefan Blücher, Johanna Vielhaben, Nils Strodthoff · Tags: FAtt · 26 Feb 2021
  • Benchmarking and Survey of Explanation Methods for Black Box Models
    F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo · Tags: XAI · 25 Feb 2021
  • Do Input Gradients Highlight Discriminative Features?
    Harshay Shah, Prateek Jain, Praneeth Netrapalli · Tags: AAML, FAtt · 25 Feb 2021
  • On the Impact of Interpretability Methods in Active Image Augmentation Method
    F. Santos, Cleber Zanchettin, L. Matos, P. Novais · Tags: AAML · 24 Feb 2021
  • LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting
    Abheesht Sharma, Harshit Pandey, Gunjan Chhablani, Yash Bhartia, T. Dash · 24 Feb 2021
  • NLRG at SemEval-2021 Task 5: Toxic Spans Detection Leveraging BERT-based Token Classification and Span Prediction Techniques
    Gunjan Chhablani, Abheesht Sharma, Harshit Pandey, Yash Bhartia, S. Suthaharan · 24 Feb 2021
  • Rethinking Natural Adversarial Examples for Classification Models
    Xiao-Li Li, Jianmin Li, Ting Dai, Jie Shi, Jun Zhu, Xiaolin Hu · Tags: AAML, VLM · 23 Feb 2021
  • Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks
    Ginevra Carbone, G. Sanguinetti, Luca Bortolussi · Tags: FAtt, AAML · 22 Feb 2021
Page 44 of 58