Towards better understanding of gradient-based attribution methods for Deep Neural Networks
Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross
arXiv:1711.06104 · 16 November 2017 · FAtt
Papers citing "Towards better understanding of gradient-based attribution methods for Deep Neural Networks" (30 of 30 papers shown)
Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution
Shichang Zhang, Tessa Han, Usha Bhalla, Hima Lakkaraju · FAtt · Citations: 0 · 17 Feb 2025
The Conformer Encoder May Reverse the Time Dimension
Robin Schmitt, Albert Zeyer, Mohammad Zeineldeen, Ralf Schlüter, Hermann Ney · Citations: 0 · 01 Oct 2024
Model Interpretation and Explainability: Towards Creating Transparency in Prediction Models
D. Kridel, Jacob Dineen, Daniel R. Dolk, David G. Castillo · Citations: 4 · 31 May 2024
What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song · Citations: 4 · 14 Mar 2024
Towards Interpretable Classification of Leukocytes based on Deep Learning
S. Röhrl, Johannes Groll, M. Lengl, Simon Schumann, C. Klenk, D. Heim, Martin Knopp, Oliver Hayden, Klaus Diepold · Citations: 2 · 24 Nov 2023
XAI-CLASS: Explanation-Enhanced Text Classification with Extremely Weak Supervision
Daniel Hajialigol, Hanwen Liu, Xuan Wang · VLM · Citations: 5 · 31 Oct 2023
MAEA: Multimodal Attribution for Embodied AI
Vidhi Jain, Jayant Sravan Tamarapalli, Sahiti Yerramilli, Yonatan Bisk · Citations: 0 · 25 Jul 2023
Lidar Line Selection with Spatially-Aware Shapley Value for Cost-Efficient Depth Completion
Kamil Adamczewski, Christos Sakaridis, Vaishakh Patil, Luc Van Gool · Citations: 1 · 21 Mar 2023
Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon · FAtt · Citations: 17 · 30 Dec 2022
Evaluating Feature Attribution Methods for Electrocardiogram
J. Suh, Jimyeong Kim, Euna Jung, Wonjong Rhee · FAtt · Citations: 2 · 23 Nov 2022
Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar · FAtt · Citations: 25 · 25 Feb 2022
First is Better Than Last for Language Data Influence
Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar · TDI · Citations: 20 · 24 Feb 2022
Morphological feature visualization of Alzheimer's disease via Multidirectional Perception GAN
Wen Yu, Baiying Lei, Yanyan Shen, Shuqiang Wang, Yong Liu, Z. Feng, Yong Hu, Michael K. Ng · GAN, MedIm · Citations: 82 · 25 Nov 2021
Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles
G. Cantareira, R. Mello, F. Paulovich · AAML · Citations: 9 · 18 Mar 2021
Contrastive Graph Neural Network Explanation
Lukas Faber, A. K. Moghaddam, Roger Wattenhofer · Citations: 36 · 26 Oct 2020
Adversarial Infidelity Learning for Model Interpretation
Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei-Yue Wang · AAML · Citations: 18 · 09 Jun 2020
SCOUT: Self-aware Discriminant Counterfactual Explanations
Pei Wang, Nuno Vasconcelos · FAtt · Citations: 81 · 16 Apr 2020
Measuring and improving the quality of visual explanations
Agnieszka Grabska-Barwińska · XAI, FAtt · Citations: 3 · 14 Mar 2020
When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt, Maximilian Granz, Tim Landgraf · BDL, FAtt, XAI · Citations: 132 · 20 Dec 2019
Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks
Aya Abdelsalam Ismail, Mohamed K. Gunady, L. Pessoa, H. C. Bravo, S. Feizi · AI4TS · Citations: 50 · 27 Oct 2019
Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Fan Yang, Mengnan Du, Xia Hu · XAI, ELM · Citations: 66 · 16 Jul 2019
An Empirical Study towards Understanding How Deep Convolutional Nets Recognize Falls
Yan Zhang, Heiko Neumann · Citations: 5 · 05 Dec 2018
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim · FAtt, AAML · Citations: 128 · 08 Oct 2018
Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim · FAtt, AAML, XAI · Citations: 1,927 · 08 Oct 2018
Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts
Samuel Carton, Qiaozhu Mei, Paul Resnick · FAtt, AAML · Citations: 34 · 01 Sep 2018
Mitigating Sybils in Federated Learning Poisoning
Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh · AAML · Citations: 497 · 14 Aug 2018
Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal · XAI · Citations: 1,840 · 31 May 2018
Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras, Marcel van Gerven, W. Haselager · XAI · Citations: 217 · 20 Mar 2018
Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller · FaML · Citations: 2,238 · 24 Jun 2017
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean · AIMat · Citations: 6,746 · 26 Sep 2016