ResearchTrend.AI
Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan (4 March 2017)
Topics: OOD, FAtt
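Since every entry below builds on this paper's Integrated Gradients method, a brief sketch may help orient readers: the attribution for feature i is (x_i − x'_i) times the average gradient of the model output along the straight-line path from a baseline x' to the input x. The NumPy sketch below approximates that path integral with a midpoint Riemann sum; the function and parameter names (`integrated_gradients`, `f_grad`, `baseline`, `steps`) are illustrative, not taken from the authors' released code.

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=50):
    """Approximate Integrated Gradients with a midpoint Riemann sum.

    f_grad: callable returning the gradient of the model output
            w.r.t. its input (same shape as x).
    x:        the input being explained.
    baseline: the reference input x' (often all zeros).
    """
    # Midpoints of `steps` equal sub-intervals of [0, 1].
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        # Gradient evaluated at a point on the straight-line path.
        total += f_grad(baseline + a * (x - baseline))
    avg_grad = total / steps
    # Scale the average gradient by the input-baseline difference.
    return (x - baseline) * avg_grad
```

A quick sanity check against the paper's completeness axiom: for f(x) = Σ x², whose gradient is 2x, the attributions sum to f(x) − f(baseline).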

Papers citing "Axiomatic Attribution for Deep Networks"

Showing 50 of 2,822 citing papers:
- Towards Locally Explaining Prediction Behavior via Gradual Interventions and Measuring Property Gradients. Niklas Penzel, Joachim Denzler (07 Mar 2025) [FAtt]
- A Unified Framework with Novel Metrics for Evaluating the Effectiveness of XAI Techniques in LLMs. Melkamu Mersha, Mesay Gemeda Yigezu, Hassan Shakil, Ali Al shami, SangHyun Byun, Jugal Kalita (06 Mar 2025)
- Enhancing Network Security Management in Water Systems using FM-based Attack Attribution. Aleksandar Avdalovic, Joseph Khoury, Ahmad Taha, E. Bou-Harb (03 Mar 2025) [AAML]
- Riemannian Integrated Gradients: A Geometric View of Explainable AI. Federico Costanza, Lachlan Simpson (02 Mar 2025)
- Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps. Lukasz Sztukiewicz, Ignacy Stepka, Michał Wiliński, Jerzy Stefanowski (28 Feb 2025)
- Enhancing Explainability with Multimodal Context Representations for Smarter Robots. Anargh Viswanath, Lokesh Veeramacheneni, Hendrik Buschmeier (28 Feb 2025)
- Foundation-Model-Boosted Multimodal Learning for fMRI-based Neuropathic Pain Drug Response Prediction. Wenrui Fan, L. M. Riza Rizky, Jiayang Zhang, Chen Chen, Haiping Lu, Kevin Teh, Dinesh Selvarajah, Shuo Zhou (28 Feb 2025)
- FedConv: A Learning-on-Model Paradigm for Heterogeneous Federated Clients. Leming Shen, Qiang Yang, Kaiyan Cui, Yuanqing Zheng, Xiao-Yong Wei, Jianwei Liu, Jinsong Han (28 Feb 2025) [FedML]
- Interpreting CLIP with Hierarchical Sparse Autoencoders. Vladimir Zaigrajew, Hubert Baniecki, P. Biecek (27 Feb 2025)
- Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models. Itay Benou, Tammy Riklin-Raviv (27 Feb 2025)
- QPM: Discrete Optimization for Globally Interpretable Image Classification. Thomas Norrenbrock, T. Kaiser, Sovan Biswas, R. Manuvinakurike, Bodo Rosenhahn (27 Feb 2025)
- Models That Are Interpretable But Not Transparent. Chudi Zhong, Panyu Chen, Cynthia Rudin (26 Feb 2025) [AAML]
- DBR: Divergence-Based Regularization for Debiasing Natural Language Understanding Models. Zihao Li, Ruixiang Tang, Lu Cheng, S. Wang, Dawei Yin, Jundong Li (25 Feb 2025)
- LED-Merging: Mitigating Safety-Utility Conflicts in Model Merging with Location-Election-Disjoint. Qianli Ma, Dongrui Liu, Qian Chen, Linfeng Zhang, Jing Shao (24 Feb 2025) [MoMe]
- Class-Dependent Perturbation Effects in Evaluating Time Series Attributions. Gregor Baer, Isel Grau, Chao Zhang, Pieter Van Gorp (24 Feb 2025) [AAML]
- Interpretable Retinal Disease Prediction Using Biology-Informed Heterogeneous Graph Representations. Laurin Lux, Alexander H. Berger, Maria Romeo Tricas, Alaa E. Fayed, Shri Kiran Srinivasan, Linus Kreitner, Jonas Weidner, M. Menten, Daniel Rueckert, Johannes C. Paetzold (23 Feb 2025)
- NeurFlow: Interpreting Neural Networks through Neuron Groups and Functional Interactions. Tue Cao, Nhat X. Hoang, Hieu H. Pham, P. Nguyen, My T. Thai (22 Feb 2025)
- A Close Look at Decomposition-based XAI-Methods for Transformer Language Models. L. Arras, Bruno Puri, Patrick Kahardipraja, Sebastian Lapuschkin, Wojciech Samek (21 Feb 2025)
- SPEX: Scaling Feature Interaction Explanations for LLMs. J. S. Kang, Landon Butler, Abhineet Agarwal, Y. E. Erginbas, Ramtin Pedarsani, Kannan Ramchandran, Bin Yu (20 Feb 2025) [VLM, LRM]
- Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness. Weisong Sun, Yuchen Chen, Mengzhe Yuan, Chunrong Fang, Zhenpeng Chen, Chong Wang, Yang Liu, Baowen Xu, Zhenyu Chen (20 Feb 2025) [AAML]
- Revisiting the Generalization Problem of Low-level Vision Models Through the Lens of Image Deraining. Jinfan Hu, Zhiyuan You, Jinjin Gu, Kaiwen Zhu, Tianfan Xue, Chao Dong (18 Feb 2025)
- From Abstract to Actionable: Pairwise Shapley Values for Explainable AI. Jiaxin Xu, Hung Chau, Angela Burden (18 Feb 2025) [TDI]
- Suboptimal Shapley Value Explanations. Xiaolei Lu (17 Feb 2025) [FAtt]
- Time-series attribution maps with regularized contrastive learning. Steffen Schneider, Rodrigo González Laiz, Anastasiia Filippova, Markus Frey, Mackenzie W. Mathis (17 Feb 2025) [BDL, FAtt, CML, AI4TS]
- Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks. Jon Vadillo, Roberto Santana, J. A. Lozano, Marta Z. Kwiatkowska (17 Feb 2025) [BDL, AAML]
- Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution. Shichang Zhang, Tessa Han, Usha Bhalla, Hima Lakkaraju (17 Feb 2025) [FAtt]
- Using the Path of Least Resistance to Explain Deep Networks. Sina Salek, Joseph Enguehard (17 Feb 2025) [FAtt]
- Mechanistic Unveiling of Transformer Circuits: Self-Influence as a Key to Model Reasoning. Lefei Zhang, Lijie Hu, Di Wang (17 Feb 2025) [LRM]
- Error-controlled non-additive interaction discovery in machine learning models. Winston Chen, Yifan Jiang, William Stafford Noble, Yang Young Lu (17 Feb 2025)
- Narrowing Information Bottleneck Theory for Multimodal Image-Text Representations Interpretability. Zhiyu Zhu, Zhibo Jin, Jiayu Zhang, Nan Yang, Jiahao Huang, Jianlong Zhou, Fang Chen (16 Feb 2025)
- Generalized Attention Flow: Feature Attribution for Transformer Models via Maximum Flow. Behrooz Azarkhalili, Maxwell Libbrecht (14 Feb 2025)
- Applying Deep Learning to Ads Conversion Prediction in Last Mile Delivery Marketplace. Di Li, Xiaochang Miao, Huiyu Song, Chao Chu, Hao Xu, Mandar Rahurkar (14 Feb 2025)
- Towards Transparent and Accurate Plasma State Monitoring at JET. Andrin Bürli, Alessandro Pau, Thomas Koller, Olivier Sauter, JET Contributors (14 Feb 2025)
- Recent Advances in Malware Detection: Graph Learning and Explainability. Hossein Shokouhinejad, Roozbeh Razavi-Far, Hesamodin Mohammadian, Mahdi Rabbani, Samuel Ansong, Griffin Higgins, Ali Ghorbani (14 Feb 2025) [AAML]
- Brain-Inspired Exploration of Functional Networks and Key Neurons in Large Language Models. Yiheng Liu, Xiaohui Gao, Haiyang Sun, Bao Ge, Tianming Liu, Junwei Han, X. Hu (13 Feb 2025)
- DejAIvu: Identifying and Explaining AI Art on the Web in Real-Time with Saliency Maps. Jocelyn Dzuong (12 Feb 2025)
- Survey on Recent Progress of AI for Chemistry: Methods, Applications, and Opportunities. Ding Hu, Pengxiang Hua, Zhen Huang (09 Feb 2025)
- Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment. Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis (06 Feb 2025)
- Deep Unfolding Multi-modal Image Fusion Network via Attribution Analysis. Haowen Bai, Zixiang Zhao, Jiangshe Zhang, Baisong Jiang, Lilun Deng, Yukun Cui, Shuang Xu, Chunxia Zhang (03 Feb 2025)
- Sparse Autoencoder Insights on Voice Embeddings. Daniel Pluth, Yu Zhou, Vijay K. Gurbani (31 Jan 2025)
- CueTip: An Interactive and Explainable Physics-aware Pool Assistant. Sean Memery, Kevin Denamganai, Jiaxin Zhang, Zehai Tu, Yiwen Guo, Kartic Subr (30 Jan 2025) [LRM]
- Fake News Detection After LLM Laundering: Measurement and Explanation. Rupak Kumar Das, Jonathan Dodge (29 Jan 2025)
- Extending Information Bottleneck Attribution to Video Sequences. Veronika Solopova, Lucas Schmidt, Dorothea Kolossa (28 Jan 2025)
- AI-Driven Predictive Analytics Approach for Early Prognosis of Chronic Kidney Disease Using Ensemble Learning and Explainable AI. K. M. T. Jawad, Anusha Verma, Fathi H. Amsaad, Lamia Ashraf (28 Jan 2025)
- B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable. Shreyash Arya, Sukrut Rao, Moritz Bohle, Bernt Schiele (28 Jan 2025)
- Evaluating the Effectiveness of XAI Techniques for Encoder-Based Language Models. Melkamu Mersha, Mesay Gemeda Yigezu, Jugal Kalita (26 Jan 2025) [ELM]
- Understanding and Mitigating Gender Bias in LLMs via Interpretable Neuron Editing. Zeping Yu, Sophia Ananiadou (24 Jan 2025) [KELM]
- Surrogate Modeling for Explainable Predictive Time Series Corrections. Alfredo Lopez, Florian Sobieczky (17 Jan 2025) [AI4TS]
- Large Language Models Share Representations of Latent Grammatical Concepts Across Typologically Diverse Languages. Jannik Brinkmann, Chris Wendler, Christian Bartelt, Aaron Mueller (10 Jan 2025)
- COMIX: Compositional Explanations using Prototypes. S. Sivaprasad, D. Kangin, Plamen Angelov, Mario Fritz (10 Jan 2025)