ResearchTrend.AI
Axiomatic Attribution for Deep Networks
4 March 2017
Mukund Sundararajan, Ankur Taly, Qiqi Yan
Topics: OOD, FAtt
arXiv: 1703.01365
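For context, this paper introduces Integrated Gradients: the attribution to feature i is IG_i(x) = (x_i - x'_i) * integral over alpha in [0, 1] of dF(x' + alpha * (x - x')) / dx_i. A minimal Riemann-sum sketch on a toy differentiable function (the helper names and the toy model are illustrative, not taken from the paper's code):

```python
def integrated_gradients(grad, x, baseline, steps=100):
    """Midpoint Riemann-sum approximation of integrated gradients.

    grad(z) returns the gradient dF/dz as a list of floats;
    x and baseline are equal-length lists of feature values.
    """
    n = len(x)
    total = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint of each sub-interval
        z = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad(z)
        for i in range(n):
            total[i] += g[i]
    # Scale the averaged gradient by the input-baseline difference.
    return [(xi - b) * t / steps for xi, b, t in zip(x, baseline, total)]

# Toy model F(x) = x0**2 + 3*x1, so grad F(x) = [2*x0, 3].
grad_f = lambda z: [2.0 * z[0], 3.0]
attr = integrated_gradients(grad_f, [1.0, 2.0], [0.0, 0.0])
# Completeness axiom: attributions sum to F(x) - F(baseline) = 7 - 0.
```

Here the midpoint rule is exact because the integrand is linear in alpha, so `attr` is [1.0, 6.0] and its sum matches F(x) - F(baseline) = 7, illustrating the completeness axiom the paper proves for the exact integral.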

Papers citing "Axiomatic Attribution for Deep Networks"

Showing 50 of 2,871 papers.
Enhancing Neural Network Interpretability Through Conductance-Based Information Plane Analysis
J. Dabounou, Amine Baazzouz · FAtt · 26 Aug 2024

Why Antiwork: A RoBERTa-Based System for Work-Related Stress Identification and Leading Factor Analysis
Tao Lu, Muzhe Wu, Xinyi Lu, Siyuan Xu, Shuyu Zhan, Anuj Tambwekar, Emily Mower Provost · 24 Aug 2024

Perturbation on Feature Coalition: Towards Interpretable Deep Neural Networks
Xuran Hu, Mingzhe Zhu, Zhenpeng Feng, Miloš Daković, Ljubiša Stanković · 23 Aug 2024

Enhancing Transferability of Adversarial Attacks with GE-AdvGAN+: A Comprehensive Framework for Gradient Editing
Zhibo Jin, Jiayu Zhang, Zhiyu Zhu, Chenyu Zhang, Jiahao Huang, Jianlong Zhou, Fang Chen · AAML · 22 Aug 2024

Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers
Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin · ViT · 22 Aug 2024
Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models
Sepehr Kamahi, Yadollah Yaghoobzadeh · 21 Aug 2024

Attribution Analysis Meets Model Editing: Advancing Knowledge Correction in Vision Language Models with VisEdit
Qizhou Chen, Taolin Zhang, Chengyu Wang, Xiaofeng He, Dakan Wang, Tingting Liu · KELM · 19 Aug 2024

LCE: A Framework for Explainability of DNNs for Ultrasound Image Based on Concept Discovery
Weiji Kong, Xun Gong, Juan Wang · 19 Aug 2024

Normalized AOPC: Fixing Misleading Faithfulness Metrics for Feature Attribution Explainability
Joakim Edin, Andreas Geert Motzfeldt, Casper L. Christensen, Tuukka Ruotsalo, Lars Maaløe, Maria Maistro · 15 Aug 2024

Enhancing Model Interpretability with Local Attribution over Global Exploration
Zhiyu Zhu, Zhibo Jin, Jiayu Zhang, Huaming Chen · FAtt · 14 Aug 2024
AdTEC: A Unified Benchmark for Evaluating Text Quality in Search Engine Advertising
Peinan Zhang, Yusuke Sakai, Masato Mita, Hiroki Ouchi, Taro Watanabe · 12 Aug 2024

RISE-iEEG: Robust to Inter-Subject Electrodes Implantation Variability iEEG Classifier
Maryam Ostadsharif Memar, Navid Ziaei, Behzad Nazari, Ali Yousefi · 12 Aug 2024

Improving Network Interpretability via Explanation Consistency Evaluation
Hefeng Wu, Hao Jiang, Keze Wang, Ziyi Tang, Xianghuan He, Liang Lin · FAtt, AAML · 08 Aug 2024

SCENE: Evaluating Explainable AI Techniques Using Soft Counterfactuals
Haoran Zheng, Utku Pamuksuz · 08 Aug 2024

Hard to Explain: On the Computational Hardness of In-Distribution Model Interpretation
Guy Amir, Shahaf Bassan, Guy Katz · 07 Aug 2024
MicroXercise: A Micro-Level Comparative and Explainable System for Remote Physical Therapy
Hanchen David Wang, Nibraas Khan, Anna Chen, Nilanjan Sarkar, Pamela Wisniewski, Meiyi Ma · 06 Aug 2024

Unveiling Factual Recall Behaviors of Large Language Models through Knowledge Neurons
Yifei Wang, Yuheng Chen, Wanting Wen, Yu Sheng, Linjing Li, D. Zeng · KELM · 06 Aug 2024

Backward Compatibility in Attributive Explanation and Enhanced Model Training Method
Ryuta Matsuno · 05 Aug 2024

Explain via Any Concept: Concept Bottleneck Model with Open Vocabulary Concepts
Andong Tan, Fengtao Zhou, Hao Chen · VLM · 05 Aug 2024

The Quest for the Right Mediator: A History, Survey, and Theoretical Grounding of Causal Interpretability
Aaron Mueller, Jannik Brinkmann, Millicent Li, Samuel Marks, Koyena Pal, ..., Arnab Sen Sharma, Jiuding Sun, Eric Todd, David Bau, Yonatan Belinkov · CML · 02 Aug 2024
META-ANOVA: Screening interactions for interpretable machine learning
Daniel A. Serino, Marc L. Klasky, Chanmoo Park, Dongha Kim, Yongdai Kim · 02 Aug 2024

Interpreting Global Perturbation Robustness of Image Models using Axiomatic Spectral Importance Decomposition
Róisín Luo, James McDermott, C. O'Riordan · AAML · 02 Aug 2024

xAI-Drop: Don't Use What You Cannot Explain
Vincenzo Marco De Luca, Antonio Longa, Andrea Passerini, Pietro Lio · 29 Jul 2024

MaskInversion: Localized Embeddings via Optimization of Explainability Maps
Walid Bousselham, Sofian Chaybouti, Christian Rupprecht, Vittorio Ferrari, Hilde Kuehne · 29 Jul 2024

BEExAI: Benchmark to Evaluate Explainable AI
Samuel Sithakoul, Sara Meftah, Clément Feutry · 29 Jul 2024

Revisiting the robustness of post-hoc interpretability methods
Jiawen Wei, Hugues Turbé, G. Mengaldo · AAML · 29 Jul 2024
Interpreting Low-level Vision Models with Causal Effect Maps
Jinfan Hu, Jinjin Gu, Shiyao Yu, Fanghua Yu, Zheyuan Li, Zhiyuan You, Chaochao Lu, Chao Dong · CML · 29 Jul 2024

On the Evaluation Consistency of Attribution-based Explanations
Jiarui Duan, Haoling Li, Haofei Zhang, Hao Jiang, Mengqi Xue, Li Sun, Mingli Song · XAI · 28 Jul 2024

CoLiDR: Concept Learning using Aggregated Disentangled Representations
Sanchit Sinha, Guangzhi Xiong, Aidong Zhang · 27 Jul 2024

Efficiently improving key weather variables forecasting by performing the guided iterative prediction in latent space
Shuangliang Li, Siwei Li · 27 Jul 2024

Practical Attribution Guidance for Rashomon Sets
Sichao Li, Amanda S. Barnard, Quanling Deng · 26 Jul 2024

Interpreting artificial neural networks to detect genome-wide association signals for complex traits
Burak Yelmen, Maris Alver, Estonian Biobank Research Team, Flora Jay, Lili Milani · 26 Jul 2024
Automated Ensemble Multimodal Machine Learning for Healthcare
F. Imrie, Stefan Denner, Lucas S. Brunschwig, Klaus H. Maier-Hein, M. Schaar · 25 Jul 2024

Exploring the Plausibility of Hate and Counter Speech Detectors with Explainable AI
Adrian Jaques Böck, D. Slijepcevic, Matthias Zeppelzauer · 25 Jul 2024

Aggregated Attributions for Explanatory Analysis of 3D Segmentation Models
Maciej Chrabaszcz, Hubert Baniecki, Piotr Komorowski, Szymon Płotka, Przemysław Biecek · 23 Jul 2024

Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning
Xinwei Liu, Xiaojun Jia, Yuan Xun, Siyuan Liang, Xiaochun Cao · 23 Jul 2024

Algebraic Adversarial Attacks on Integrated Gradients
Lachlan Simpson, Federico Costanza, Kyle Millar, A. Cheng, Cheng-Chew Lim, Hong-Gunn Chew · SILM, AAML · 23 Jul 2024

Explanation Regularisation through the Lens of Attributions
Pedro Ferreira, Wilker Aziz, Ivan Titov · 23 Jul 2024
A Survey of Explainable Artificial Intelligence (XAI) in Financial Time Series Forecasting
Pierre-Daniel Arsenault, Shengrui Wang, Jean-Marc Patenande · XAI, AI4TS · 22 Jul 2024

The Rlign Algorithm for Enhanced Electrocardiogram Analysis through R-Peak Alignment for Explainable Classification and Clustering
Lucas Plagwitz, Lucas Bickmann, Michael Fujarski, Alexander Brenner, Warnes Gobalakrishnan, Lars Eckardt, Antonius Büscher, Julian Varghese · 22 Jul 2024

An Explainable Fast Deep Neural Network for Emotion Recognition
Francesco Di Luzio, A. Rosato, Massimo Panella · CVBM · 20 Jul 2024

Evaluating the Reliability of Self-Explanations in Large Language Models
Korbinian Randl, John Pavlopoulos, Aron Henriksson, Tony Lindgren · LRM · 19 Jul 2024

Investigating the Indirect Object Identification circuit in Mamba
Danielle Ensign, Adrià Garriga-Alonso · Mamba · 19 Jul 2024

DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer
A. Wentzel, Serageldin Attia, Xinhua Zhang, G. Canahuate, Clifton Fuller, G. Marai · 18 Jul 2024
Validating Mechanistic Interpretations: An Axiomatic Approach
Nils Palumbo, Ravi Mangal, Zifan Wang, Saranya Vijayakumar, Corina S. Pasareanu, Somesh Jha · 18 Jul 2024

NNsight and NDIF: Democratizing Access to Open-Weight Foundation Model Internals
Jaden Fiotto-Kaufman, Alexander R. Loftus, Eric Todd, Jannik Brinkmann, Caden Juang, ..., Carla Brodley, Arjun Guha, Jonathan Bell, Byron C. Wallace, David Bau · 18 Jul 2024

Geometric Remove-and-Retrain (GOAR): Coordinate-Invariant eXplainable AI Assessment
Yong-Hyun Park, Junghoon Seo, Bomseok Park, Seongsu Lee, Junghyo Jo · AAML · 17 Jul 2024

Are Linear Regression Models White Box and Interpretable?
A. M. Salih, Yuhe Wang · XAI · 16 Jul 2024

Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth · FAtt · 16 Jul 2024

LLM Circuit Analyses Are Consistent Across Training and Scale
Curt Tigges, Michael Hanna, Qinan Yu, Stella Biderman · 15 Jul 2024