Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
4 March 2017 · arXiv: 1703.01365 · v2 (latest)
Tags: OOD, FAtt

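For context, the paper indexed above introduces Integrated Gradients (IG), which attributes a prediction F(x) to input features by accumulating gradients along the straight-line path from a baseline x' to the input x: IG_i(x) = (x_i - x'_i) * ∫₀¹ ∂F(x' + α(x - x'))/∂x_i dα. Below is a minimal NumPy sketch of the usual Riemann-sum approximation, not code from this page; the `grad_fn` callable, the baseline choice, and the step count are illustrative assumptions.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Riemann-sum approximation of Integrated Gradients.

    IG_i(x) = (x_i - baseline_i) * integral_0^1 dF/dx_i(baseline + a*(x - baseline)) da

    grad_fn is a hypothetical callable returning dF/dx at a point;
    supply your framework's gradient function here.
    """
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    grads = sum(grad_fn(baseline + a * (x - baseline)) for a in alphas)
    return (x - baseline) * grads / steps      # average gradient * displacement

# Sanity check on a linear model F(x) = w . x, whose gradient is constantly w.
# The completeness axiom says attributions sum to F(x) - F(baseline).
w = np.array([1.0, -2.0, 0.5])
ig = integrated_gradients(lambda z: w, x=np.ones(3), baseline=np.zeros(3))
assert np.isclose(ig.sum(), w @ np.ones(3) - w @ np.zeros(3))
```
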
Papers citing "Axiomatic Attribution for Deep Networks"

Showing 50 of 2,873 citing papers.

An End-to-End Set Transformer for User-Level Classification of Depression and Gambling Disorder
Ana-Maria Bucur, Adrian Cosma, Liviu P. Dinu, Paolo Rosso
65 · 8 · 0 · 02 Jul 2022

PhilaeX: Explaining the Failure and Success of AI Models in Malware Detection
Zhi Lu, V. Thing
Tags: AAML
25 · 5 · 0 · 02 Jul 2022

Understanding Instance-Level Impact of Fairness Constraints
Jialu Wang, Xinze Wang, Yang Liu
Tags: TDI, FaML
108 · 34 · 0 · 30 Jun 2022

Distilling Model Failures as Directions in Latent Space
Saachi Jain, Hannah Lawrence, Ankur Moitra, Aleksander Madry
100 · 90 · 0 · 29 Jun 2022

Private Graph Extraction via Feature Explanations
Iyiola E. Olatunji, Mandeep Rathee, Thorben Funke, Megha Khosla
Tags: AAML, FAtt
74 · 12 · 0 · 29 Jun 2022

TE2Rules: Explaining Tree Ensembles using Rules
G. R. Lal, Xiaotong Chen, Varun Mithal
68 · 3 · 0 · 29 Jun 2022

On the amplification of security and privacy risks by post-hoc explanations in machine learning models
Pengrui Quan, Supriyo Chakraborty, J. Jeyakumar, Mani B. Srivastava
Tags: MIACV, AAML
93 · 5 · 0 · 28 Jun 2022

BAGEL: A Benchmark for Assessing Graph Neural Network Explanations
Mandeep Rathee, Thorben Funke, Avishek Anand, Megha Khosla
75 · 15 · 0 · 28 Jun 2022

Explaining Any ML Model? -- On Goals and Capabilities of XAI
Moritz Renftle, Holger Trittenbach, M. Poznic, Reinhard Heil
Tags: ELM
77 · 6 · 0 · 28 Jun 2022

When are Post-hoc Conceptual Explanations Identifiable?
Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci
126 · 12 · 0 · 28 Jun 2022

Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior
Jean-Stanislas Denain, Jacob Steinhardt
Tags: AAML
106 · 7 · 0 · 27 Jun 2022

Thermodynamics-inspired Explanations of Artificial Intelligence
S. Mehdi, P. Tiwary
Tags: AI4CE
70 · 18 · 0 · 27 Jun 2022

Discovering Salient Neurons in Deep NLP Models
Nadir Durrani, Fahim Dalvi, Hassan Sajjad
Tags: KELM, MILM
114 · 16 · 0 · 27 Jun 2022

Explaining the root causes of unit-level changes
Kailash Budhathoki, George Michailidis, Dominik Janzing
Tags: FAtt
61 · 4 · 0 · 26 Jun 2022

Robustness of Explanation Methods for NLP Models
Shriya Atmakuri, Tejas Chheda, Dinesh Kandula, Nishant Yadav, Taesung Lee, Hessel Tuinhof
Tags: FAtt, AAML
72 · 4 · 0 · 24 Jun 2022

VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives
Zhuofan Ying, Peter Hase, Joey Tianyi Zhou
Tags: LRM
87 · 13 · 0 · 22 Jun 2022

Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
Liu Zhendong, Wenyu Jiang, Yan Zhang, Chongjun Wang
Tags: CML
40 · 0 · 0 · 22 Jun 2022

OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju
Tags: XAI
148 · 147 · 0 · 22 Jun 2022

Understanding Robust Learning through the Lens of Representation Similarities
Christian Cianfarani, A. Bhagoji, Vikash Sehwag, Ben Y. Zhao, Prateek Mittal, Haitao Zheng
Tags: OOD
81 · 16 · 0 · 20 Jun 2022

Visualizing and Understanding Contrastive Learning
Fawaz Sammani, Boris Joukovsky, Nikos Deligiannis
Tags: SSL, FAtt
96 · 9 · 0 · 20 Jun 2022

GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
Kenza Amara, Rex Ying, Zitao Zhang, Zhihao Han, Yinan Shan, U. Brandes, S. Schemm, Ce Zhang
88 · 57 · 0 · 20 Jun 2022

FD-CAM: Improving Faithfulness and Discriminability of Visual Explanation for CNNs
Hui Li, Zihao Li, Rui Ma, Tieru Wu
Tags: FAtt
47 · 9 · 0 · 17 Jun 2022

Accelerating Shapley Explanation via Contributive Cooperator Selection
Guanchu Wang, Yu-Neng Chuang, Mengnan Du, Fan Yang, Quan-Gen Zhou, Pushkar Tripathi, Xuanting Cai, Helen Zhou
Tags: FAtt
93 · 22 · 0 · 17 Jun 2022

Quantifying Feature Contributions to Overall Disparity Using Information Theory
Sanghamitra Dutta, Praveen Venkatesh, P. Grover
Tags: FAtt
55 · 5 · 0 · 16 Jun 2022

Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability
Jonathan Crabbé, Alicia Curth, Ioana Bica, M. Schaar
Tags: CML
112 · 16 · 0 · 16 Jun 2022

Inherent Inconsistencies of Feature Importance
Nimrod Harel, Uri Obolski, Ran Gilad-Bachrach
Tags: FAtt
40 · 0 · 0 · 16 Jun 2022

Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations
Teodor Chiaburu, F. Biessmann, Frank Haußer
59 · 2 · 0 · 15 Jun 2022

The Manifold Hypothesis for Gradient-Based Explanations
Sebastian Bordt, Uddeshya Upadhyay, Zeynep Akata, U. V. Luxburg
Tags: FAtt, AAML
75 · 14 · 0 · 15 Jun 2022

Attributions Beyond Neural Networks: The Linear Program Case
Florian Peter Busch, Matej Zečević, Kristian Kersting, Devendra Singh Dhami
Tags: FAtt
59 · 0 · 0 · 14 Jun 2022

Machines Explaining Linear Programs
David Steinmann, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
Tags: FAtt
26 · 0 · 0 · 14 Jun 2022

Self-Supervision on Images and Text Reduces Reliance on Visual Shortcut Features
Anil Palepu, Andrew L. Beam
Tags: OOD, VLM
51 · 5 · 0 · 14 Jun 2022

On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective
M. Serrurier, Franck Mamalet, Thomas Fel, Louis Bethune, Thibaut Boissin
Tags: AAML, FAtt
82 · 6 · 0 · 14 Jun 2022

Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
Paul Novello, Thomas Fel, David Vigouroux
Tags: FAtt
85 · 29 · 0 · 13 Jun 2022

Geometrically Guided Integrated Gradients
Md. Mahfuzur Rahman, N. Lewis, Sergey Plis
Tags: FAtt, AAML
34 · 0 · 0 · 13 Jun 2022

A Functional Information Perspective on Model Interpretation
Itai Gat, Nitay Calderon, Roi Reichart, Tamir Hazan
Tags: AAML, FAtt
84 · 6 · 0 · 12 Jun 2022

Diffeomorphic Counterfactuals with Generative Models
Ann-Kathrin Dombrowski, Jan E. Gerken, Klaus-Robert Muller, Pan Kessel
Tags: DiffM, BDL
125 · 17 · 0 · 10 Jun 2022

GAMR: A Guided Attention Model for (visual) Reasoning
Mohit Vaishnav, Thomas Serre
Tags: LRM
88 · 16 · 0 · 10 Jun 2022

Learning to Estimate Shapley Values with Vision Transformers
Ian Covert, Chanwoo Kim, Su-In Lee
Tags: FAtt
89 · 39 · 0 · 10 Jun 2022

On the Bias-Variance Characteristics of LIME and SHAP in High Sparsity Movie Recommendation Explanation Tasks
Claudia V. Roberts, Ehtsham Elahi, Ashok Chandrashekar
Tags: FAtt
63 · 4 · 0 · 09 Jun 2022

STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer
Trung Le, Eli Shlizerman
73 · 24 · 0 · 09 Jun 2022

DORA: Exploring Outlier Representations in Deep Neural Networks
Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Muller, Marina M.-C. Höhne
123 · 13 · 0 · 09 Jun 2022

Xplique: A Deep Learning Explainability Toolbox
Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poché, Justin Plakoo, ..., Agustin Picard, C. Nicodeme, Laurent Gardes, G. Flandin, Thomas Serre
65 · 30 · 0 · 09 Jun 2022

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser
96 · 37 · 0 · 08 Jun 2022

Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark
Mohamed Karim Belaid, Eyke Hüllermeier, Maximilian Rabus, Ralf Krestel
Tags: ELM
50 · 0 · 0 · 08 Jun 2022

Balanced background and explanation data are needed in explaining deep learning models with SHAP: An empirical study on clinical decision making
Mingxuan Liu, Yilin Ning, Han Yuan, M. Ong, Nan Liu
Tags: FAtt
45 · 1 · 0 · 08 Jun 2022

From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, S. Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
Tags: FAtt
87 · 150 · 0 · 07 Jun 2022

Fooling Explanations in Text Classifiers
Adam Ivankay, Ivan Girardi, Chiara Marchiori, P. Frossard
Tags: AAML
85 · 20 · 0 · 07 Jun 2022

Saliency Cards: A Framework to Characterize and Compare Saliency Methods
Angie Boggust, Harini Suresh, Hendrik Strobelt, John Guttag, Arvindmani Satyanarayan
Tags: FAtt, XAI
85 · 10 · 0 · 07 Jun 2022

Self-supervised Learning for Human Activity Recognition Using 700,000 Person-days of Wearable Data
H. Yuan, Shing Chan, Andrew P. Creagh, C. Tong, Aidan Acquah, David Clifton, Aiden Doherty
Tags: SSL
107 · 94 · 0 · 06 Jun 2022

A Human-Centric Take on Model Monitoring
Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi
97 · 11 · 0 · 06 Jun 2022

Page 32 of 58.