ResearchTrend.AI

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
arXiv:1703.01365 (v2, latest) · 4 March 2017
Topics: OOD, FAtt
Links: arXiv (abs) · PDF · HTML
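For context, the paper above introduces Integrated Gradients: attributions are the input-minus-baseline difference scaled by the average gradient along the straight-line path between baseline and input, which by the completeness axiom sum to f(x) − f(baseline). A minimal NumPy sketch with a Riemann-sum approximation; the toy model and all names here are illustrative, not from the paper:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=64):
    """Approximate Integrated Gradients with a midpoint Riemann sum.

    grad_f: gradient of the model output w.r.t. its input (callable).
    """
    x, baseline = np.asarray(x, float), np.asarray(baseline, float)
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of the path in [0, 1]
    grads = [grad_f(baseline + a * (x - baseline)) for a in alphas]
    return (x - baseline) * np.mean(grads, axis=0)

# Toy differentiable "model": f(x) = x0**2 + 3*x1, with its analytic gradient.
f = lambda x: x[0] ** 2 + 3 * x[1]
grad_f = lambda x: np.array([2 * x[0], 3.0])

x, baseline = np.array([2.0, 1.0]), np.zeros(2)
attr = integrated_gradients(grad_f, x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline).
assert np.isclose(attr.sum(), f(x) - f(baseline))
```

In practice `grad_f` would come from autodiff (e.g. a framework's backward pass) rather than a hand-written gradient; the midpoint rule is one of several quadrature choices.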

Papers citing "Axiomatic Attribution for Deep Networks"

50 / 2,871 papers shown
Improving Interpretation Faithfulness for Vision Transformers
Lijie Hu
Yixin Liu
Ninghao Liu
Mengdi Huai
Lichao Sun
Di Wang
89
9
0
29 Nov 2023
Interpreting Differentiable Latent States for Healthcare Time-series Data
Yu Chen
Nivedita Bijlani
Samaneh Kouchaki
Payam Barnaghi
FAtt
46
0
0
29 Nov 2023
Elucidating Discrepancy in Explanations of Predictive Models Developed using EMR
A. Brankovic
Wenjie Huang
David Cook
Sankalp Khanna
K. Bialkowski
16
3
0
28 Nov 2023
Influence Scores at Scale for Efficient Language Data Sampling
Nikhil Anand
Joshua Tan
Maria Minakova
TDI
102
3
0
27 Nov 2023
Knowledge Unlearning for LLMs: Tasks, Methods, and Challenges
Nianwen Si
Hao Zhang
Heyu Chang
Wenlin Zhang
Dan Qu
Weiqiang Zhang
KELM, MU
162
33
0
27 Nov 2023
GLIME: General, Stable and Local LIME Explanation
Zeren Tan
Yang Tian
Jian Li
FAtt, LRM
83
20
0
27 Nov 2023
Injecting linguistic knowledge into BERT for Dialogue State Tracking
Xiaohan Feng
Xixin Wu
Helen M. Meng
88
0
0
27 Nov 2023
Having Second Thoughts? Let's hear it
J. H. Lee
Sujith Vijayan
AAML
37
0
0
26 Nov 2023
Occlusion Sensitivity Analysis with Augmentation Subspace Perturbation in Deep Feature Space
Pedro Valois
Koichiro Niinuma
Kazuhiro Fukui
AAML
78
5
0
25 Nov 2023
CT-xCOV: a CT-scan based Explainable Framework for COVid-19 diagnosis
Ismail Elbouknify
A. Bouhoute
Khalid Fardousse
Ismail Berrada
Abdelmajid Badri
61
1
0
24 Nov 2023
Robust and Interpretable COVID-19 Diagnosis on Chest X-ray Images using Adversarial Training
Karina Yang
Alexis Bennett
Dominique Duncan
OOD
71
1
0
23 Nov 2023
You Only Explain Once
David A. Kelly
Hana Chockler
Daniel Kroening
Nathan Blake
Aditi Ramaswamy
Melane Navaratnarajah
Aaditya Shivakumar
94
2
0
23 Nov 2023
Understanding the Vulnerability of CLIP to Image Compression
Cangxiong Chen
Vinay P. Namboodiri
Julian Padget
50
2
0
23 Nov 2023
Explaining high-dimensional text classifiers
Odelia Melamed
Rich Caruana
48
0
0
22 Nov 2023
Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue
Aron Molnar
Jaap Jumelet
Mario Giulianelli
Arabella J. Sinclair
70
2
0
21 Nov 2023
Neural Network Pruning by Gradient Descent
Zhang Zhang
Ruyi Tao
Jiang Zhang
58
4
0
21 Nov 2023
Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI
Romy Müller
Marius Thoss
Julian Ullrich
Steffen Seitz
Carsten Knoll
64
3
0
21 Nov 2023
Visual Analytics for Generative Transformer Models
Raymond Li
Ruixin Yang
Wen Xiao
Ahmed AbuRaed
Gabriel Murray
Giuseppe Carenini
70
2
0
21 Nov 2023
Deep Tensor Network
Yifan Zhang
116
0
0
18 Nov 2023
GAIA: Delving into Gradient-based Attribution Abnormality for Out-of-distribution Detection
Jinggang Chen
Junjie Li
Xiaoyang Qu
Jianzong Wang
Jiguang Wan
Jing Xiao
OODD
67
10
0
16 Nov 2023
Controllable Text Summarization: Unraveling Challenges, Approaches, and Prospects -- A Survey
Ashok Urlana
Pruthwik Mishra
Tathagato Roy
Rahul Mishra
78
11
0
15 Nov 2023
Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks
Ting-Yun Chang
Jesse Thomason
Robin Jia
83
19
0
15 Nov 2023
Token Prediction as Implicit Classification to Identify LLM-Generated Text
Yutian Chen
Hao Kang
Vivian Zhai
Liangze Li
Rita Singh
Bhiksha Raj
DeLMO
54
26
0
15 Nov 2023
Finding AI-Generated Faces in the Wild
Gonzalo J. Aniano Porcile
Jack Gindi
Shivansh Mundra
J. Verbus
Hany Farid
CVBM
81
7
0
14 Nov 2023
The Disagreement Problem in Faithfulness Metrics
Brian Barr
Noah Fatsi
Leif Hancox-Li
Peter Richter
Daniel Proano
Caleb Mok
77
4
0
13 Nov 2023
On Measuring Faithfulness or Self-consistency of Natural Language Explanations
Letitia Parcalabescu
Anette Frank
LRM
125
29
0
13 Nov 2023
Optimising Human-AI Collaboration by Learning Convincing Explanations
Alex J. Chan
Alihan Huyuk
M. Schaar
91
3
0
13 Nov 2023
Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations
Koorosh Aslansefat
Mojgan Hashemian
M. Walker
Mohammed Naveed Akram
Ioannis Sorokos
Y. Papadopoulos
FAtt, AAML
49
3
0
13 Nov 2023
AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification
Zirui Gong
Liyue Shen
Yanjun Zhang
Leo Yu Zhang
Jingwei Wang
Guangdong Bai
Yong Xiang
AAML
75
7
0
13 Nov 2023
Explainability of Vision Transformers: A Comprehensive Review and New Perspectives
Rojina Kashefi
Leili Barekatain
Mohammad Sabokrou
Fatemeh Aghaeipoor
ViT
109
11
0
12 Nov 2023
Greedy PIG: Adaptive Integrated Gradients
Kyriakos Axiotis
Sami Abu-El-Haija
Lin Chen
Matthew Fahrbach
Gang Fu
FAtt
67
0
0
10 Nov 2023
A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning
Valeriia Cherepanova
Roman Levin
Gowthami Somepalli
Jonas Geiping
C. Bayan Bruss
Andrew Gordon Wilson
Tom Goldstein
Micah Goldblum
64
20
0
10 Nov 2023
Generative Explanations for Graph Neural Network: Methods and Evaluations
Jialin Chen
Kenza Amara
Junchi Yu
Rex Ying
76
4
0
09 Nov 2023
ABIGX: A Unified Framework for eXplainable Fault Detection and Classification
Yue Zhuo
Jinchuan Qian
Zhihuan Song
Zhiqiang Ge
39
1
0
09 Nov 2023
SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training
Rui Xu
Wenkang Qin
Peixiang Huang
Hao Wang
Lin Luo
FAtt, AAML
59
3
0
09 Nov 2023
DEMASQ: Unmasking the ChatGPT Wordsmith
Kavita Kumari
Alessandro Pegoraro
Hossein Fereidooni
Ahmad-Reza Sadeghi
DeLMO
54
5
0
08 Nov 2023
Be Careful When Evaluating Explanations Regarding Ground Truth
Hubert Baniecki
Maciej Chrabaszcz
Andreas Holzinger
Bastian Pfeifer
Anna Saranti
P. Biecek
FAtt, AAML
81
3
0
08 Nov 2023
The PetShop Dataset -- Finding Causes of Performance Issues across Microservices
Michaela Hardt
William Orchard
Patrick Blobaum
S. Kasiviswanathan
Elke Kirschbaum
AI4TS
61
2
0
08 Nov 2023
Massive Editing for Large Language Models via Meta Learning
Chenmien Tan
Ge Zhang
Jie Fu
KELM
113
43
0
08 Nov 2023
Explainable AI for Earth Observation: Current Methods, Open Challenges, and Opportunities
G. Taşkın
E. Aptoula
Alp Ertürk
70
2
0
08 Nov 2023
Quantifying Uncertainty in Natural Language Explanations of Large Language Models
Sree Harsha Tanneru
Chirag Agarwal
Himabindu Lakkaraju
LRM
68
15
0
06 Nov 2023
Assessing Fidelity in XAI post-hoc techniques: A Comparative Study with Ground Truth Explanations Datasets
Miquel Miró-Nicolau
Antoni Jaume-i-Capó
Gabriel Moyà Alcover
XAI
104
11
0
03 Nov 2023
Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models
Sean Xie
Soroush Vosoughi
Saeed Hassanpour
122
4
0
03 Nov 2023
Fast Shapley Value Estimation: A Unified Approach
Borui Zhang
Baotong Tian
Wenzhao Zheng
Jie Zhou
Jiwen Lu
TDI, FAtt
106
1
0
02 Nov 2023
SmoothHess: ReLU Network Feature Interactions via Stein's Lemma
Max Torop
A. Masoomi
Davin Hill
Kivanc Kose
Stratis Ioannidis
Jennifer Dy
128
5
0
01 Nov 2023
Transferability and explainability of deep learning emulators for regional climate model projections: Perspectives for future applications
Jorge Baño-Medina
M. Iturbide
Jesús Fernández
J. M. Gutiérrez
44
9
0
01 Nov 2023
Medical Image Denosing via Explainable AI Feature Preserving Loss
Guanfang Dong
Anup Basu
MedIm
38
3
0
31 Oct 2023
Hidden Conflicts in Neural Networks and Their Implications for Explainability
Adam Dejl
Hamed Ayoobi
Matthew Williams
Francesca Toni
FAtt, BDL
133
3
0
31 Oct 2023
Multiscale Feature Attribution for Outliers
Jeff Shen
Peter Melchior
13
0
0
30 Oct 2023
Explaining the Decisions of Deep Policy Networks for Robotic Manipulations
Seongun Kim
Jaesik Choi
48
4
0
30 Oct 2023