Axiomatic Attribution for Deep Networks
arXiv:1703.01365 · Versions: v1, v2 (latest)
4 March 2017

Mukund Sundararajan
Ankur Taly
Qiqi Yan

OOD, FAtt

Papers citing "Axiomatic Attribution for Deep Networks"

50 / 2,871 papers shown
Fixing confirmation bias in feature attribution methods via semantic match
Giovanni Cina
Daniel Fernandez-Llaneza
Ludovico Deponte
Nishant Mishra
Tabea E. Rober
Sandro Pezzelle
Iacer Calixto
Rob Goedhart
Ş. İlker Birbil
FAtt
78
1
0
03 Jul 2023
Structured Network Pruning by Measuring Filter-wise Interactions
Wenting Tang
Xingxing Wei
Yue Liu
47
0
0
03 Jul 2023
Identifying Important Sensory Feedback for Learning Locomotion Skills
Wanming Yu
Chuanyu Yang
C. McGreavy
Eleftherios Triantafyllidis
Guillaume Bellegarda
M. Shafiee
A. Ijspeert
Zhibin Li
85
16
0
29 Jun 2023
An end-to-end framework for gene expression classification by integrating a background knowledge graph: application to cancer prognosis prediction
Kazuma Inoue
Ryosuke Kojima
M. Kamada
Yasushi Okuno
35
0
0
29 Jun 2023
Increasing Performance And Sample Efficiency With Model-agnostic Interactive Feature Attributions
J. Michiels
Marina De Vos
Johan A. K. Suykens
LRM, FAtt
73
0
0
28 Jun 2023
An Empirical Evaluation of the Rashomon Effect in Explainable Machine Learning
Sebastian Müller
Vanessa Toborek
Katharina Beckh
Matthias Jakobs
Christian Bauckhage
Pascal Welke
FAtt
122
17
0
27 Jun 2023
Four Axiomatic Characterizations of the Integrated Gradients Attribution Method
Daniel Lundstrom
Meisam Razaviyayn
FAtt
51
3
0
23 Jun 2023
Pre or Post-Softmax Scores in Gradient-based Attribution Methods, What is Best?
Miguel A. Lerma
Mirtha Lucas
FAtt
94
3
0
22 Jun 2023
Towards Explainable Evaluation Metrics for Machine Translation
Christoph Leiter
Piyawat Lertvittayakumjorn
M. Fomicheva
Wei Zhao
Yang Gao
Steffen Eger
ELM
104
15
0
22 Jun 2023
XAI-TRIS: Non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance
Benedict Clark
Rick Wilming
Stefan Haufe
100
5
0
22 Jun 2023
Identifying and Disentangling Spurious Features in Pretrained Image Representations
R. Darbinyan
Hrayr Harutyunyan
Aram H. Markosyan
Hrant Khachatrian
62
3
0
22 Jun 2023
Evaluating the overall sensitivity of saliency-based explanation methods
Harshinee Sriram
Cristina Conati
AAML, XAI, FAtt
113
0
0
21 Jun 2023
Feature Interactions Reveal Linguistic Structure in Language Models
Jaap Jumelet
Willem H. Zuidema
FAtt
61
7
0
21 Jun 2023
Benchmark data to study the influence of pre-training on explanation performance in MR image classification
Marta Oliveira
Rick Wilming
Benedict Clark
Céline Budding
Fabian Eitel
K. Ritter
Stefan Haufe
57
1
0
21 Jun 2023
Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted?
A. Brankovic
David Cook
Jessica Rahman
Wenjie Huang
Sankalp Khanna
68
1
0
21 Jun 2023
Computing a human-like reaction time metric from stable recurrent vision models
L. Goetschalckx
L. Govindarajan
A. Ashok
A. Ahuja
David L. Sheinberg
Thomas Serre
65
9
0
20 Jun 2023
Did the Models Understand Documents? Benchmarking Models for Language Understanding in Document-Level Relation Extraction
Haotian Chen
Bingsheng Chen
Xiangdong Zhou
101
8
0
20 Jun 2023
A Novel Counterfactual Data Augmentation Method for Aspect-Based Sentiment Analysis
Dongming Wu
Lulu Wen
Chao Chen
Zhaoshu Shi
83
3
0
20 Jun 2023
A Lightweight Generative Model for Interpretable Subject-level Prediction
C. Mauri
Stefano Cerri
Oula Puonti
Mark Muhlau
Koen van Leemput
MedIm, AI4CE
126
0
0
19 Jun 2023
B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers
Moritz D Boehle
Navdeeppal Singh
Mario Fritz
Bernt Schiele
166
27
0
19 Jun 2023
Detection of Sensor-To-Sensor Variations using Explainable AI
Sarah Seifi
Sebastian A. Schober
Cecilia Carbonelli
Lorenzo Servadei
Robert Wille
52
0
0
19 Jun 2023
Cross-Domain Toxic Spans Detection
Stefan F. Schouten
Baran Barbarestani
Wondimagegnhue Tufa
Piek Vossen
I. Markov
42
4
0
16 Jun 2023
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
Swarnadeep Saha
Peter Hase
Mohit Bansal
LRM
80
11
0
15 Jun 2023
Improving Explainability of Disentangled Representations using Multipath-Attribution Mappings
Lukas Klein
João B. S. Carvalho
Mennatallah El-Assady
Paolo Penna
J. M. Buhmann
Paul F. Jaeger
48
4
0
15 Jun 2023
Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability
E. Z. Zeng
Hayden Gunraj
Sheldon Fernandez
Alexander Wong
XAI
44
1
0
14 Jun 2023
Reliable Evaluation of Adversarial Transferability
Wenqian Yu
Jindong Gu
Zhijiang Li
Philip Torr
AAML
97
8
0
14 Jun 2023
On the Robustness of Removal-Based Feature Attributions
Christy Lin
Ian Covert
Su-In Lee
125
13
0
12 Jun 2023
Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization
Thomas Fel
Thibaut Boissin
Victor Boutin
Agustin Picard
Paul Novello
...
Drew Linsley
Tom Rousseau
Rémi Cadène
Laurent Gardes
Thomas Serre
FAtt
92
22
0
11 Jun 2023
A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
Thomas Fel
Victor Boutin
Mazda Moayeri
Rémi Cadène
Louis Bethune
Léo Andéol
Mathieu Chalvidal
Thomas Serre
FAtt
99
64
0
11 Jun 2023
On Minimizing the Impact of Dataset Shifts on Actionable Explanations
Anna P. Meyer
Dan Ley
Suraj Srinivas
Himabindu Lakkaraju
FAtt
69
6
0
11 Jun 2023
Self-Interpretable Time Series Prediction with Counterfactual Explanations
Jingquan Yan
Hao Wang
BDL, AI4TS
76
14
0
09 Jun 2023
Strategies to exploit XAI to improve classification systems
Andrea Apicella
Luca Di Lorenzo
Francesco Isgrò
A. Pollastro
R. Prevete
33
11
0
09 Jun 2023
Multimodal Explainable Artificial Intelligence: A Comprehensive Review of Methodological Advances and Future Research Directions
N. Rodis
Christos Sardianos
Panagiotis I. Radoglou-Grammatikis
Panagiotis G. Sarigiannidis
Iraklis Varlamis
Georgios Th. Papadopoulos
111
24
0
09 Jun 2023
Explaining Predictive Uncertainty with Information Theoretic Shapley Values
David S. Watson
Joshua O'Hara
Niek Tax
Richard Mudd
Ido Guy
TDI, FAtt
70
24
0
09 Jun 2023
Sound Explanation for Trustworthy Machine Learning
Kai Jia
Pasapol Saowakon
L. Appelbaum
Martin Rinard
XAI, FAtt, FaML
58
2
0
08 Jun 2023
Robust Explainer Recommendation for Time Series Classification
Thu Trang Nguyen
Thach le Nguyen
Georgiana Ifrim
AI4TS
99
6
0
08 Jun 2023
Interpretable Deep Clustering for Tabular Data
Jonathan Svirsky
Ofir Lindenbaum
85
6
0
07 Jun 2023
Don't trust your eyes: on the (un)reliability of feature visualizations
Robert Geirhos
Roland S. Zimmermann
Blair Bilodeau
Wieland Brendel
Been Kim
FAtt, OOD
131
31
0
07 Jun 2023
Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications
Paul Pu Liang
Chun Kai Ling
Yun Cheng
A. Obolenskiy
Yudong Liu
Rohan Pandey
Alex Wilf
Louis-Philippe Morency
Ruslan Salakhutdinov
OffRL
81
12
0
07 Jun 2023
On the Detectability of ChatGPT Content: Benchmarking, Methodology, and Evaluation through the Lens of Academic Writing
Zeyan Liu
Zijun Yao
Fengjun Li
Bo Luo
DeLMO
93
23
0
07 Jun 2023
Adversarial attacks and defenses in explainable artificial intelligence: A survey
Hubert Baniecki
P. Biecek
AAML
146
71
0
06 Jun 2023
Time Interpret: a Unified Model Interpretability Library for Time Series
Joseph Enguehard
FAtt, AI4TS
70
4
0
05 Jun 2023
Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization
Yebin Liu
Chris Xing Tian
Haoliang Li
Lei Ma
Shiqi Wang
UQCV
98
22
0
05 Jun 2023
DecompX: Explaining Transformers Decisions by Propagating Token Decomposition
Ali Modarressi
Mohsen Fayyaz
Ehsan Aghazadeh
Yadollah Yaghoobzadeh
Mohammad Taher Pilehvar
100
28
0
05 Jun 2023
Input-gradient space particle inference for neural network ensembles
Trung Trinh
Markus Heinonen
Luigi Acerbi
Samuel Kaski
UQCV
75
4
0
05 Jun 2023
Sanity Checks for Saliency Methods Explaining Object Detectors
Deepan Padmanabhan
Paul G. Plöger
Octavio Arriaga
Matias Valdenegro-Toro
FAtt, AAML, XAI
57
2
0
04 Jun 2023
Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency
Owen Queen
Thomas Hartvigsen
Teddy Koker
Huan He
Theodoros Tsiligkaridis
Marinka Zitnik
AI4TS
98
21
0
03 Jun 2023
Painsight: An Extendable Opinion Mining Framework for Detecting Pain Points Based on Online Customer Reviews
Yukyung Lee
Jaehee Kim
Doyoon Kim
Yoo-Seok Kho
Younsun Kim
Pilsung Kang
74
0
0
03 Jun 2023
A Survey on Explainability of Graph Neural Networks
Jaykumar Kakkad
Jaspal Jannu
Kartik Sharma
Charu C. Aggarwal
Sourav Medya
68
28
0
02 Jun 2023
Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables
Rick Wilming
Leo Kieslich
Benedict Clark
Stefan Haufe
77
10
0
02 Jun 2023
Page 22 of 58