ResearchTrend.AI


Axiomatic Attribution for Deep Networks (arXiv:1703.01365, v2 latest)
4 March 2017
Mukund Sundararajan, Ankur Taly, Qiqi Yan
Tags: OOD, FAtt
Links: arXiv (abs) · PDF · HTML

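The paper above introduced Integrated Gradients, which attributes a model's prediction to its input features by integrating gradients along a straight-line path from a baseline to the input. As context for the citation list below, here is a minimal NumPy sketch of the method (the names `integrated_gradients` and `grad_f` are illustrative, not the authors' reference implementation), approximating the path integral with a midpoint Riemann sum:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=64):
    """Approximate IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a(x - x')) da.

    grad_f: callable returning the gradient of the model F at a point.
    The integral is approximated with a midpoint Riemann sum over `steps` points.
    """
    alphas = (np.arange(steps) + 0.5) / steps       # midpoints in (0, 1)
    diff = x - baseline
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * diff)         # gradient along the path
    return diff * total / steps

# Sanity check on a linear model f(x) = w.x, where the exact attribution is
# w_i * (x_i - x'_i) and IG's completeness axiom holds exactly.
w = np.array([1.0, -2.0, 3.0])
f = lambda x: float(w @ x)
grad_f = lambda x: w
x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
attr = integrated_gradients(grad_f, x, baseline)
# By completeness, attr.sum() equals f(x) - f(baseline)
```

For a linear model the Riemann sum is exact; for deep networks, more `steps` trades compute for a tighter approximation of the completeness property.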
Papers citing "Axiomatic Attribution for Deep Networks"

50 / 2,871 papers shown
Protecting Publicly Available Data With Machine Learning Shortcuts
Nicolas Müller
Maximilian Burgert
Pascal Debus
Jennifer Williams
Philip Sperl
Konstantin Böttinger
65
0
0
30 Oct 2023
TempME: Towards the Explainability of Temporal Graph Neural Networks via Motif Discovery
Jialin Chen
Rex Ying
AI4TS
59
24
0
30 Oct 2023
D4Explainer: In-Distribution GNN Explanations via Discrete Denoising Diffusion
Jialin Chen
Shirley Wu
Abhijit Gupta
Rex Ying
DiffM
57
5
0
30 Oct 2023
This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations
Chiyu Ma
Brandon Zhao
Chaofan Chen
Cynthia Rudin
82
29
0
28 Oct 2023
Visual Explanations via Iterated Integrated Attributions
Oren Barkan
Yehonatan Elisha
Yuval Asher
Amit Eshel
Noam Koenigstein
FAtt, XAI
47
18
0
28 Oct 2023
Sample based Explanations via Generalized Representers
Che-Ping Tsai
Chih-Kuan Yeh
Pradeep Ravikumar
FAtt
95
9
0
27 Oct 2023
Understanding Parameter Saliency via Extreme Value Theory
Shuo Wang
Issei Sato
AAML, FAtt
36
0
0
27 Oct 2023
A Comprehensive and Reliable Feature Attribution Method: Double-sided Remove and Reconstruct (DoRaR)
Dong Qin
G. Amariucai
Daji Qiao
Yong Guan
Shen Fu
134
5
0
27 Oct 2023
A Survey on Transferability of Adversarial Examples across Deep Neural Networks
Jindong Gu
Xiaojun Jia
Pau de Jorge
Wenqain Yu
Xinwei Liu
...
Anjun Hu
Ashkan Khakzar
Zhijiang Li
Xiaochun Cao
Philip Torr
AAML
120
31
0
26 Oct 2023
SoK: Pitfalls in Evaluating Black-Box Attacks
Fnu Suya
Anshuman Suri
Tingwei Zhang
Jingtao Hong
Yuan Tian
David Evans
AAML
104
6
0
26 Oct 2023
This Reads Like That: Deep Learning for Interpretable Natural Language Processing
Claudio Fanconi
Moritz Vandenhirtz
Severin Husmann
Julia E. Vogt
FAtt
70
2
0
25 Oct 2023
PROMINET: Prototype-based Multi-View Network for Interpretable Email Response Prediction
Yuqing Wang
Prashanth Vijayaraghavan
Ehsan Degan
63
4
0
25 Oct 2023
Learning to Explain: A Model-Agnostic Framework for Explaining Black Box Models
Oren Barkan
Yuval Asher
Amit Eshel
Yehonatan Elisha
Noam Koenigstein
75
5
0
25 Oct 2023
On the stability, correctness and plausibility of visual explanation methods based on feature importance
Romain Xu-Darme
Jenny Benois-Pineau
R. Giot
Georges Quénot
Zakaria Chihani
M. Rousset
Alexey Zhukov
XAI, FAtt
78
1
0
25 Oct 2023
Sanity checks for patch visualisation in prototype-based image classification
Romain Xu-Darme
Georges Quénot
Zakaria Chihani
M. Rousset
58
6
0
25 Oct 2023
Corrupting Neuron Explanations of Deep Visual Features
Divyansh Srivastava
Tuomas P. Oikarinen
Tsui-Wei Weng
FAtt, AAML
44
2
0
25 Oct 2023
Instance-wise Linearization of Neural Network for Model Interpretation
Zhimin Li
Shusen Liu
B. Kailkhura
Timo Bremer
Valerio Pascucci
MILM, FAtt
64
0
0
25 Oct 2023
Sum-of-Parts: Self-Attributing Neural Networks with End-to-End Learning of Feature Groups
Weiqiu You
Helen Qu
Marco Gatti
Bhuvnesh Jain
Eric Wong
FAtt, FaML
97
3
0
25 Oct 2023
Contrastive Learning-based Sentence Encoders Implicitly Weight Informative Words
Hiroto Kurita
Goro Kobayashi
Sho Yokoi
Kentaro Inui
64
4
0
24 Oct 2023
Climate Change Impact on Agricultural Land Suitability: An Interpretable Machine Learning-Based Eurasia Case Study
Valeriy Shevchenko
Daria Taniushkina
Aleksander Lukashevich
Aleksandr Bulkin
Roland Grinis
Kirill Kovalev
Veronika Narozhnaia
Nazar Sotiriadi
Alexander Krenke
Yury Maximov
AI4CE
43
7
0
24 Oct 2023
Deep Integrated Explanations
Oren Barkan
Yehonatan Elisha
Jonathan Weill
Yuval Asher
Amit Eshel
Noam Koenigstein
FAtt
107
7
0
23 Oct 2023
XTSC-Bench: Quantitative Benchmarking for Explainers on Time Series Classification
Jacqueline Höllig
Steffen Thoma
Florian Grimm
AI4TS
62
1
0
23 Oct 2023
Cross-Modal Conceptualization in Bottleneck Models
Danis Alukaev
S. Kiselev
Ilya Pershin
Bulat Ibragimov
Vladimir Ivanov
Alexey Kornaev
Ivan Titov
78
7
0
23 Oct 2023
REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization
Mohammad Reza Ghasemi Madani
Pasquale Minervini
91
4
0
22 Oct 2023
Preference Elicitation with Soft Attributes in Interactive Recommendation
Erdem Biyik
Fan Yao
Yinlam Chow
Alex Haig
Chih-Wei Hsu
Mohammad Ghavamzadeh
Craig Boutilier
135
4
0
22 Oct 2023
Make Your Decision Convincing! A Unified Two-Stage Framework: Self-Attribution and Decision-Making
Yanrui Du
Sendong Zhao
Hao Wang
Yuhan Chen
Rui Bai
Zewen Qiang
Muzhen Cai
Bing Qin
64
0
0
20 Oct 2023
Does Your Model Think Like an Engineer? Explainable AI for Bearing Fault Detection with Deep Learning
Thomas Decker
Michael Lebacher
Volker Tresp
31
13
0
19 Oct 2023
Transformer-based Entity Legal Form Classification
Alexander Arimond
Mauro Molteni
Dominik Jany
Zornitsa Manolova
Damian Borth
Andreas G. F. Hoepner
MedIm, AILaw
54
1
0
19 Oct 2023
SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation
Chongyu Fan
Jiancheng Liu
Yihua Zhang
Eric Wong
Dennis Wei
Sijia Liu
MU
143
150
0
19 Oct 2023
MARVEL: Multi-Agent Reinforcement-Learning for Large-Scale Variable Speed Limits
Yuhang Zhang
Marcos Quiñones-Grueiro
Zhiyao Zhang
Yanbing Wang
William Barbour
Gautam Biswas
Dan Work
48
5
0
18 Oct 2023
A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation
Giuseppe Attanasio
Flor Miriam Plaza del Arco
Debora Nozza
Anne Lauscher
72
19
0
18 Oct 2023
From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks
Jae Hee Lee
Sergio Lanza
Stefan Wermter
73
10
0
18 Oct 2023
From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification
Shanshan Xu
Santosh T.Y.S.S
O. Ichim
Isabella Risini
Barbara Plank
Matthias Grabmair
AILaw
116
12
0
18 Oct 2023
VECHR: A Dataset for Explainable and Robust Classification of Vulnerability Type in the European Court of Human Rights
Shanshan Xu
Leon Staufer
Santosh T.Y.S.S
O. Ichim
Corina Heri
Matthias Grabmair
50
0
0
17 Oct 2023
Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations
Shiyuan Huang
Siddarth Mamidanna
Shreedhar Jangam
Yilun Zhou
Leilani H. Gilpin
LRM, MILM, ELM
116
77
0
17 Oct 2023
Nonet at SemEval-2023 Task 6: Methodologies for Legal Evaluation
S. Nigam
Aniket Deroy
Noel Shallum
Ayush Kumar Mishra
Anup Roy
Shubham Kumar Mishra
Arnab Bhattacharya
Saptarshi Ghosh
Kripabandhu Ghosh
AILaw, ELM
80
11
0
17 Oct 2023
Learning optimal integration of spatial and temporal information in noisy chemotaxis
Albert Alonso
J. B. Kirkegaard
54
4
0
16 Oct 2023
DANAA: Towards transferable attacks with double adversarial neuron attribution
Zhibo Jin
Zhiyu Zhu
Xinyi Wang
Jiayu Zhang
Jun Shen
Huaming Chen
AAML
66
10
0
16 Oct 2023
Transparent Anomaly Detection via Concept-based Explanations
Laya Rafiee Sevyeri
Ivaxi Sheth
Farhood Farahnak
Samira Ebrahimi Kahou
S. Enger
60
4
0
16 Oct 2023
LICO: Explainable Models with Language-Image Consistency
Yiming Lei
Zilong Li
Yangyang Li
Junping Zhang
Hongming Shan
VLM, FAtt
53
7
0
15 Oct 2023
Assessing the Reliability of Large Language Model Knowledge
Weixuan Wang
Barry Haddow
Alexandra Birch
Wei Peng
KELM, HILM
106
15
0
15 Oct 2023
Notes on Applicability of Explainable AI Methods to Machine Learning Models Using Features Extracted by Persistent Homology
Naofumi Hama
89
0
0
15 Oct 2023
Interpretable Diffusion via Information Decomposition
Xianghao Kong
Ollie Liu
Han Li
Dani Yogatama
Greg Ver Steeg
107
22
0
12 Oct 2023
Faithfulness Measurable Masked Language Models
Andreas Madsen
Siva Reddy
Sarath Chandar
85
3
0
11 Oct 2023
Human-Centered Evaluation of XAI Methods
Karam Dawoud
Wojciech Samek
Peter Eisert
Sebastian Lapuschkin
Sebastian Bosse
66
4
0
11 Oct 2023
NeuroInspect: Interpretable Neuron-based Debugging Framework through Class-conditional Visualizations
Yeong-Joon Ju
Ji-Hoon Park
Seong-Whan Lee
AAML
46
0
0
11 Oct 2023
Comparing Styles across Languages: A Cross-Cultural Exploration of Politeness
Shreya Havaldar
Matthew Pressimone
Eric Wong
Lyle Ungar
125
2
0
11 Oct 2023
Evaluating Explanation Methods for Vision-and-Language Navigation
Guanqi Chen
Lei Yang
Guanhua Chen
Jia Pan
XAI
65
1
0
10 Oct 2023
AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments
Yang Zhang
Yawei Li
Hannah Brown
Mina Rezaei
Bernd Bischl
Philip Torr
Ashkan Khakzar
Kenji Kawaguchi
OOD
81
2
0
10 Oct 2023
Interpreting CLIP's Image Representation via Text-Based Decomposition
Yossi Gandelsman
Alexei A. Efros
Jacob Steinhardt
VLM
98
101
0
09 Oct 2023