ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Axiomatic Attribution for Deep Networks (arXiv:1703.01365)
4 March 2017
Mukund Sundararajan
Ankur Taly
Qiqi Yan
OOD
FAtt
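The method this paper introduces, Integrated Gradients, attributes a model's prediction to its input features by averaging gradients along the straight-line path from a baseline input to the actual input. A minimal sketch in NumPy, assuming the model is available as a gradient function; the quadratic toy model `grad_f` below is a placeholder, not part of the paper:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Riemann-sum approximation of Integrated Gradients:
    IG_i(x) = (x_i - b_i) * integral_0^1 dF(b + a*(x - b))/dx_i da
    """
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy model F(x) = sum(x**2), whose gradient is 2x.
grad_f = lambda x: 2.0 * x
x = np.array([1.0, 2.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_f, x, baseline)
# Completeness axiom: attributions sum to F(x) - F(baseline) = 5.0
```

For this toy model the attribution for each feature is exactly x_i squared, and the attributions sum to F(x) minus F(baseline), illustrating the completeness axiom the paper proves.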

Papers citing "Axiomatic Attribution for Deep Networks"

50 / 2,826 papers shown
Explanation sensitivity to the randomness of large language models: the case of journalistic text classification
Jérémie Bogaert
Marie-Catherine de Marneffe
Antonin Descampe
Louis Escouflaire
Cedrick Fairon
François-Xavier Standaert
24
1
0
07 Oct 2024
Evaluating the Correctness of Inference Patterns Used by LLMs for Judgment
Lu Chen
Yuxuan Huang
Yixing Li
Dongrui Liu
Qihan Ren
Shuai Zhao
Kun Kuang
Zilong Zheng
Quanshi Zhang
36
1
0
06 Oct 2024
Riemann Sum Optimization for Accurate Integrated Gradients Computation
Swadesh Swain
Shree Singhi
28
0
0
05 Oct 2024
Understanding the Effect of Algorithm Transparency of Model Explanations in Text-to-SQL Semantic Parsing
Daking Rai
Rydia R. Weiland
Kayla Margaret Gabriella Herrera
Tyler H. Shaw
Ziyu Yao
41
1
0
05 Oct 2024
Disentangling Textual and Acoustic Features of Neural Speech Representations
Hosein Mohebbi
Grzegorz Chrupała
Willem H. Zuidema
A. Alishahi
Ivan Titov
CoGe
31
0
0
03 Oct 2024
F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI
Xu Zheng
Farhad Shirani
Zhuomin Chen
Chaohao Lin
Wei Cheng
Wenbo Guo
Dongsheng Luo
AAML
38
0
0
03 Oct 2024
Explainable Earth Surface Forecasting under Extreme Events
Oscar J. Pellicer-Valero
Miguel-Ángel Fernández-Torres
Chaonan Ji
Miguel D. Mahecha
Gustau Camps-Valls
23
0
0
02 Oct 2024
Learning-Augmented Robust Algorithmic Recourse
Kshitij Kayastha
Vasilis Gkatzelis
Shahin Jabbari
39
0
0
02 Oct 2024
One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability
Gabriel Kasmi
Amandine Brunetto
Thomas Fel
Jayneel Parekh
AAML
FAtt
35
0
0
02 Oct 2024
Mitigating Copy Bias in In-Context Learning through Neuron Pruning
Ameen Ali
Lior Wolf
Ivan Titov
46
2
0
02 Oct 2024
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning
Jiale Zhang
Chengcheng Zhu
Bosen Rao
Hao Sui
Xiaobing Sun
Bing Chen
Chunyi Zhou
Shouling Ji
AAML
40
0
0
02 Oct 2024
Tackling the Accuracy-Interpretability Trade-off in a Hierarchy of Machine Learning Models for the Prediction of Extreme Heatwaves
Alessandro Lovo
Amaury Lancelin
Corentin Herbert
Freddy Bouchet
AI4CE
32
0
0
01 Oct 2024
Best Practices for Responsible Machine Learning in Credit Scoring
Giovani Valdrighi
Athyrson M. Ribeiro
Jansen S. B. Pereira
Vitoria Guardieiro
Arthur Hendricks
...
Juan David Nieto Garcia
Felipe F. Bocca
Thalita B. Veronese
Lucas Wanner
Marcos Medeiros Raimundo
FaML
37
0
0
30 Sep 2024
Sufficient and Necessary Explanations (and What Lies in Between)
Beepul Bharti
P. Yi
Jeremias Sulam
XAI
FAtt
35
1
0
30 Sep 2024
Restore Anything with Masks: Leveraging Mask Image Modeling for Blind All-in-One Image Restoration
Chu-Jie Qin
Rui-Qi Wu
Zikun Liu
Xin Lin
Chun-Le Guo
Hyun Hee Park
Chongyi Li
31
6
0
28 Sep 2024
Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations
Supriya Manna
Niladri Sett
AAML
31
2
0
26 Sep 2024
Recent advances in interpretable machine learning using structure-based protein representations
L. Vecchietti
Minji Lee
Begench Hangeldiyev
Hyunkyu Jung
Hahnbeom Park
Tae-Kyun Kim
Meeyoung Cha
Ho Min Kim
AI4CE
45
1
0
26 Sep 2024
The Overfocusing Bias of Convolutional Neural Networks: A Saliency-Guided Regularization Approach
David Bertoin
Eduardo Hugo Sanchez
Mehdi Zouitine
Emmanuel Rachelson
45
0
0
25 Sep 2024
Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI
Elisa Nguyen
Johannes Bertram
Evgenii Kortukov
Jean Y. Song
Seong Joon Oh
TDI
385
2
0
25 Sep 2024
Enhancing Feature Selection and Interpretability in AI Regression Tasks Through Feature Attribution
Alexander Hinterleitner
T. Bartz-Beielstein
Richard Schulz
Sebastian Spengler
Thomas Winter
Christoph Leitenmeier
34
1
0
25 Sep 2024
Leveraging Local Structure for Improving Model Explanations: An Information Propagation Approach
Ruo Yang
Binghui Wang
M. Bilgic
FAtt
21
0
0
24 Sep 2024
Creating Healthy Friction: Determining Stakeholder Requirements of Job Recommendation Explanations
Roan Schellingerhout
Francesco Barile
Nava Tintarev
34
1
0
24 Sep 2024
Facing Asymmetry -- Uncovering the Causal Link between Facial Symmetry and Expression Classifiers using Synthetic Interventions
Tim Buchner
Niklas Penzel
Orlando Guntinas-Lichius
Joachim Denzler
CVBM
43
2
0
24 Sep 2024
GATher: Graph Attention Based Predictions of Gene-Disease Links
David Narganes-Carlon
Anniek Myatt
Mani Mudaliar
Daniel J. Crowther
42
0
0
23 Sep 2024
VLM's Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models
Nam Hyeon-Woo
Moon Ye-Bin
Wonseok Choi
Lee Hyun
Tae-Hyun Oh
CoGe
28
3
0
23 Sep 2024
Explainable AI needs formal notions of explanation correctness
Stefan Haufe
Rick Wilming
Benedict Clark
Rustam Zhumagambetov
Danny Panknin
Ahcène Boubekki
XAI
35
1
0
22 Sep 2024
A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders
David Chanin
James Wilken-Smith
Tomáš Dulka
Hardik Bhatnagar
Joseph Bloom
23
21
0
22 Sep 2024
Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis
Zeping Yu
Sophia Ananiadou
LRM
MILM
32
7
0
21 Sep 2024
The FIX Benchmark: Extracting Features Interpretable to eXperts
Helen Jin
Shreya Havaldar
Chaehyeon Kim
Anton Xue
Weiqiu You
...
Bhuvnesh Jain
Amin Madani
M. Sako
Lyle Ungar
Eric Wong
31
1
0
20 Sep 2024
Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights
Mohamad Ballout
U. Krumnack
Gunther Heidemann
Kai-Uwe Kühnberger
35
2
0
19 Sep 2024
Measuring Sound Symbolism in Audio-visual Models
Wei-Cheng Tseng
Yi-Jen Shih
David Harwath
Raymond Mooney
39
0
0
18 Sep 2024
Additive-feature-attribution methods: a review on explainable artificial intelligence for fluid dynamics and heat transfer
Andres Cremades
S. Hoyas
Ricardo Vinuesa
FAtt
31
9
0
18 Sep 2024
Gradient-free Post-hoc Explainability Using Distillation Aided Learnable Approach
Debarpan Bhattacharya
A. H. Poorjam
Deepak Mittal
Sriram Ganapathy
32
0
0
17 Sep 2024
Trustworthy Conceptual Explanations for Neural Networks in Robot Decision-Making
Som Sagar
Aditya Taparia
Harsh Mankodiya
Pranav M Bidare
Yifan Zhou
Ransalu Senanayake
FAtt
34
0
0
16 Sep 2024
Trustworthiness in Retrieval-Augmented Generation Systems: A Survey
Yujia Zhou
Yan Liu
Xiaoxi Li
Jiajie Jin
Hongjin Qian
Zheng Liu
Chaozhuo Li
Zhicheng Dou
Tsung-Yi Ho
Philip S. Yu
3DV
RALM
60
28
0
16 Sep 2024
Optimal ablation for interpretability
Maximilian Li
Lucas Janson
FAtt
51
2
0
16 Sep 2024
InfoDisent: Explainability of Image Classification Models by Information Disentanglement
Łukasz Struski
Dawid Rymarczyk
Jacek Tabor
59
1
0
16 Sep 2024
Integrated Multi-Level Knowledge Distillation for Enhanced Speaker Verification
Wenhao Yang
Jianguo Wei
Wenhuan Lu
Xugang Lu
Lei Li
33
0
0
14 Sep 2024
XSub: Explanation-Driven Adversarial Attack against Blackbox Classifiers via Feature Substitution
Kiana Vu
Phung Lai
Truc D. T. Nguyen
AAML
36
0
0
13 Sep 2024
LMAC-TD: Producing Time Domain Explanations for Audio Classifiers
Eleonora Mancini
Francesco Paissan
Mirco Ravanelli
Cem Subakan
31
1
0
13 Sep 2024
Y-Drop: A Conductance based Dropout for fully connected layers
Efthymios Georgiou
Georgios Paraskevopoulos
Alexandros Potamianos
13
0
0
11 Sep 2024
ELMS: Elasticized Large Language Models On Mobile Devices
Wangsong Yin
Rongjie Yi
Daliang Xu
Gang Huang
Mengwei Xu
Xuanzhe Liu
37
5
0
08 Sep 2024
You can remove GPT2's LayerNorm by fine-tuning
Stefan Heimersheim
AI4CE
19
3
0
06 Sep 2024
Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings
Wei Liu
Chris North
Rebecca Faust
33
0
0
06 Sep 2024
Decompose the model: Mechanistic interpretability in image models with Generalized Integrated Gradients (GIG)
Yearim Kim
Sangyu Han
Sangbum Han
Nojun Kwak
60
0
0
03 Sep 2024
Explanation Space: A New Perspective into Time Series Interpretability
Shahbaz Rezaei
Xin Liu
AI4TS
34
1
0
02 Sep 2024
Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features
Thomas Schnake
Farnoush Rezaei Jafaria
Jonas Lederer
Ping Xiong
Shinichi Nakajima
Stefan Gugler
G. Montavon
Klaus-Robert Müller
48
4
0
30 Aug 2024
Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha
Khang Lam
Joseph Wood
Ali AlShami
Jugal Kalita
XAI
AI4TS
83
28
0
30 Aug 2024
IBO: Inpainting-Based Occlusion to Enhance Explainable Artificial Intelligence Evaluation in Histopathology
Pardis Afshar
Sajjad Hashembeiki
Pouya Khani
Emad Fatemizadeh
M. Rohban
34
4
0
29 Aug 2024
ClimDetect: A Benchmark Dataset for Climate Change Detection and Attribution
Sungduk Yu
Brian L. White
Anahita Bhiwandiwalla
Musashi Hinck
M. L. Olson
Tung Nguyen
Vasudev Lal
42
0
0
28 Aug 2024