Axiomatic Attribution for Deep Networks
arXiv:1703.01365 · 4 March 2017
by Mukund Sundararajan, Ankur Taly, Qiqi Yan · OODFAtt
Papers citing "Axiomatic Attribution for Deep Networks" (showing 50 of 2,873)
Contrastive Corpus Attribution for Explaining Representations
by Christy Lin, Hugh Chen, Chanwoo Kim, Su-In Lee · SSL · 30 Sep 2022
Evaluation of importance estimators in deep learning classifiers for Computed Tomography
by L. Brocki, Wistan Marchadour, Jonas Maison, B. Badic, P. Papadimitroulas, M. Hatt, Franck Vermet, N. C. Chung · 30 Sep 2022
Sequential Attention for Feature Selection
by T. Yasuda, M. Bateni, Lin Chen, Matthew Fahrbach, Gang Fu, Vahab Mirrokni · 29 Sep 2022
Causal Proxy Models for Concept-Based Model Explanations
by Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, Christopher Potts · MILM · 28 Sep 2022
WeightedSHAP: analyzing and improving Shapley based feature attributions
by Yongchan Kwon, James Zou · TDIFAtt · 27 Sep 2022
BayesNetCNN: incorporating uncertainty in neural networks for image-based classification tasks
by Matteo Ferrante, T. Boccato, N. Toschi · BDLUQCV · 27 Sep 2022
Ablation Path Saliency
by Justus Sagemüller, Olivier Verdier · FAttAAML · 26 Sep 2022
Dead or Murdered? Predicting Responsibility Perception in Femicide News Reports
by Gosse Minnema, Sara Gemelli, C. Zanchi, Tommaso Caselli, Malvina Nissim · 24 Sep 2022
I-SPLIT: Deep Network Interpretability for Split Computing
by Federico Cunico, Luigi Capogrosso, Francesco Setti, D. Carra, Franco Fummi, Marco Cristani · 23 Sep 2022
Towards Faithful Model Explanation in NLP: A Survey
by Qing Lyu, Marianna Apidianaki, Chris Callison-Burch · XAI · 22 Sep 2022
Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
by Jonathan Crabbé, M. Schaar · 22 Sep 2022
Learning Visual Explanations for DCNN-Based Image Classifiers Using an Attention Mechanism
by Ioanna Gkartzonika, Nikolaos Gkalelis, Vasileios Mezaris · 22 Sep 2022
Scope of Pre-trained Language Models for Detecting Conflicting Health Information
by Joseph D. Gatto, Madhusudan Basak, S. Preum · 22 Sep 2022
Fairness Reprogramming
by Guanhua Zhang, Yihua Zhang, Yang Zhang, Wenqi Fan, Qing Li, Sijia Liu, Shiyu Chang · AAML · 21 Sep 2022
Can Shadows Reveal Biometric Information?
by Safa C. Medin, Amir Weiss, F. Durand, William T. Freeman, G. Wornell · CVBM · 21 Sep 2022
Sparse Interaction Additive Networks via Feature Interaction Detection and Sparse Selection
by James Enouen, Yan Liu · 19 Sep 2022
Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off
by M. Zarlenga, Pietro Barbiero, Gabriele Ciravegna, G. Marra, Francesco Giannini, ..., F. Precioso, S. Melacci, Adrian Weller, Pietro Lio, M. Jamnik · 19 Sep 2022
An Overview on the Generation and Detection of Synthetic and Manipulated Satellite Images
by Lydia Abady, E. D. Cannas, Paolo Bestagini, B. Tondi, Stefano Tubaro, Mauro Barni · 19 Sep 2022
A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models
by S. Karatsiolis, A. Kamilaris · FAtt · 19 Sep 2022
Domain Classification-based Source-specific Term Penalization for Domain Adaptation in Hate-speech Detection
by Tulika Bose, Nikolaos Aletras, Irina Illina, Dominique Fohr · 18 Sep 2022
Why Deep Surgical Models Fail?: Revisiting Surgical Action Triplet Recognition through the Lens of Robustness
by Ya-Hsin Cheng, Lihao Liu, Shujun Wang, Yueming Jin, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero · 18 Sep 2022
EMaP: Explainable AI with Manifold-based Perturbations
by Minh Nhat Vu, Huy Mai, My T. Thai · AAML · 18 Sep 2022
NeuCEPT: Locally Discover Neural Networks' Mechanism via Critical Neurons Identification with Precision Guarantee
by Minh Nhat Vu, Truc D. T. Nguyen, My T. Thai · AAML · 18 Sep 2022
Machine Reading, Fast and Slow: When Do Models "Understand" Language?
by Sagnik Ray Choudhury, Anna Rogers, Isabelle Augenstein · LRM · 15 Sep 2022
Explainable AI for clinical and remote health applications: a survey on tabular and time series data
by Flavio Di Martino, Franca Delmastro · AI4TS · 14 Sep 2022
Concept-Based Explanations for Tabular Data
by Varsha Pendyala, Jihye Choi · FaMLXAIFAtt · 13 Sep 2022
Saliency Guided Adversarial Training for Learning Generalizable Features with Applications to Medical Imaging Classification System
by Xin Li, Yao Qiang, Chengyin Li, Sijia Liu, D. Zhu · OODMedIm · 09 Sep 2022
Adapting to Non-Centered Languages for Zero-shot Multilingual Translation
by Zhi Qu, Taro Watanabe · 09 Sep 2022
From Shapley Values to Generalized Additive Models and back
by Sebastian Bordt, U. V. Luxburg · FAttTDI · 08 Sep 2022
Responsibility: An Example-based Explainable AI approach via Training Process Inspection
by Faraz Khadivpour, Arghasree Banerjee, Matthew J. Guzdial · XAI · 07 Sep 2022
A Survey of Neural Trees
by Haoling Li, Mingli Song, Mengqi Xue, Haofei Zhang, Jingwen Ye, Lechao Cheng, Mingli Song · AI4CE · 07 Sep 2022
Defending Against Backdoor Attack on Graph Nerual Network by Explainability
by B. Jiang, Zhao Li · AAMLGNN · 07 Sep 2022
Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes
by A. Fedorov, Eloy P. T. Geenjaar, Lei Wu, Tristan Sylvain, T. DeRamus, Margaux Luck, Maria B. Misiura, R. Devon Hjelm, Sergey Plis, Vince D. Calhoun · 07 Sep 2022
Change Detection for Local Explainability in Evolving Data Streams
by Johannes Haug, Alexander Braun, Stefan Zurn, Gjergji Kasneci · FAtt · 06 Sep 2022
TFN: An Interpretable Neural Network with Time-Frequency Transform Embedded for Intelligent Fault Diagnosis
by Qian Chen, Xingjian Dong, Guowei Tu, Dong Wang, Baoxuan Zhao, Zhike Peng · AI4CE · 05 Sep 2022
Extend and Explain: Interpreting Very Long Language Models
by Joel Stremmel, B. Hill, Jeffrey S. Hertzberg, Jaime Murillo, Llewelyn Allotey, Eran Halperin · 02 Sep 2022
Exploring Gradient-based Multi-directional Controls in GANs
by Zikun Chen, R. Jiang, Brendan Duke, Han Zhao, P. Aarabi · 01 Sep 2022
Concept Gradient: Concept-based Interpretation Without Linear Assumption
by Andrew Bai, Chih-Kuan Yeh, Pradeep Ravikumar, Neil Y. C. Lin, Cho-Jui Hsieh · 31 Aug 2022
LUCID: Exposing Algorithmic Bias through Inverse Design
by Carmen Mazijn, Carina E. A. Prunkl, Andres Algaba, J. Danckaert, Vincent Ginis · SyDa · 26 Aug 2022
Towards Benchmarking Explainable Artificial Intelligence Methods
by Lars Holmberg · 25 Aug 2022
Shortcut Learning of Large Language Models in Natural Language Understanding
by Mengnan Du, Fengxiang He, Na Zou, Dacheng Tao, Helen Zhou · KELMOffRL · 25 Aug 2022
Anomaly Attribution with Likelihood Compensation
by T. Idé, Amit Dhurandhar, Jiří Navrátil, Moninder Singh, Naoki Abe · 23 Aug 2022
Statistical Aspects of SHAP: Functional ANOVA for Model Interpretation
by Andrew Herren, P. R. Hahn · FAtt · 21 Aug 2022
Inferring Sensitive Attributes from Model Explanations
by Vasisht Duddu, A. Boutet · MIACVSILM · 21 Aug 2022
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
by Pedro C. Neto, Tiago B. Gonçalves, João Ribeiro Pinto, W. Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso · XAI · 19 Aug 2022
SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability
by Wei Huang, Xingyu Zhao, Gao Jin, Xiaowei Huang · AAML · 19 Aug 2022
Evaluating Explainability for Graph Neural Networks
by Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, Marinka Zitnik · 19 Aug 2022
UKP-SQuARE v2: Explainability and Adversarial Attacks for Trustworthy QA
by Rachneet Sachdeva, Haritz Puerto, Tim Baumgärtner, Sewin Tariverdian, Hao Zhang, Kexin Wang, H. Saad, Leonardo F. R. Ribeiro, Iryna Gurevych · AAML · 19 Aug 2022
Neural Payoff Machines: Predicting Fair and Stable Payoff Allocations Among Team Members
by Daphne Cornelisse, Thomas Rood, Mateusz Malinowski, Yoram Bachrach, Tal Kachman · 18 Aug 2022
Transcending XAI Algorithm Boundaries through End-User-Inspired Design
by Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Xiaoxiao Li, Ghassan Hamarneh · 18 Aug 2022