Towards better understanding of gradient-based attribution methods for Deep Neural Networks
Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross
arXiv:1711.06104 · 16 November 2017 · FAtt

Papers citing "Towards better understanding of gradient-based attribution methods for Deep Neural Networks"

30 of 30 citing papers shown.

Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution
Shichang Zhang, Tessa Han, Usha Bhalla, Hima Lakkaraju
FAtt · 17 Feb 2025

The Conformer Encoder May Reverse the Time Dimension
Robin Schmitt, Albert Zeyer, Mohammad Zeineldeen, Ralf Schlüter, Hermann Ney
01 Oct 2024

Model Interpretation and Explainability: Towards Creating Transparency in Prediction Models
D. Kridel, Jacob Dineen, Daniel R. Dolk, David G. Castillo
31 May 2024

What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
14 Mar 2024

Towards Interpretable Classification of Leukocytes based on Deep Learning
S. Röhrl, Johannes Groll, M. Lengl, Simon Schumann, C. Klenk, D. Heim, Martin Knopp, Oliver Hayden, Klaus Diepold
24 Nov 2023

XAI-CLASS: Explanation-Enhanced Text Classification with Extremely Weak Supervision
Daniel Hajialigol, Hanwen Liu, Xuan Wang
VLM · 31 Oct 2023

MAEA: Multimodal Attribution for Embodied AI
Vidhi Jain, Jayant Sravan Tamarapalli, Sahiti Yerramilli, Yonatan Bisk
25 Jul 2023

Lidar Line Selection with Spatially-Aware Shapley Value for Cost-Efficient Depth Completion
Kamil Adamczewski, Daniel Gehrig, Vaishakh Patil, Luc Van Gool
21 Mar 2023

Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon
FAtt · 30 Dec 2022

Evaluating Feature Attribution Methods for Electrocardiogram
J. Suh, Jimyeong Kim, Euna Jung, Wonjong Rhee
FAtt · 23 Nov 2022

Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar
FAtt · 25 Feb 2022

First is Better Than Last for Language Data Influence
Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar
TDI · 24 Feb 2022

Morphological feature visualization of Alzheimer's disease via Multidirectional Perception GAN
Wen Yu, Baiying Lei, Yanyan Shen, Shuqiang Wang, Yong Liu, Z. Feng, Yong Hu, Michael K. Ng
GAN, MedIm · 25 Nov 2021

Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles
G. Cantareira, R. Mello, F. Paulovich
AAML · 18 Mar 2021

Contrastive Graph Neural Network Explanation
Lukas Faber, A. K. Moghaddam, Roger Wattenhofer
26 Oct 2020

Adversarial Infidelity Learning for Model Interpretation
Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei-Yue Wang
AAML · 09 Jun 2020

SCOUT: Self-aware Discriminant Counterfactual Explanations
Pei Wang, Nuno Vasconcelos
FAtt · 16 Apr 2020

Measuring and improving the quality of visual explanations
Agnieszka Grabska-Barwińska
XAI, FAtt · 14 Mar 2020

When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt, Maximilian Granz, Tim Landgraf
BDL, FAtt, XAI · 20 Dec 2019

Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks
Aya Abdelsalam Ismail, Mohamed K. Gunady, L. Pessoa, H. C. Bravo, S. Feizi
AI4TS · 27 Oct 2019

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Fan Yang, Mengnan Du, Xia Hu
XAI, ELM · 16 Jul 2019

An Empirical Study towards Understanding How Deep Convolutional Nets Recognize Falls
Yan Zhang, Heiko Neumann
05 Dec 2018

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim
FAtt, AAML · 08 Oct 2018

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI · 08 Oct 2018

Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts
Samuel Carton, Qiaozhu Mei, Paul Resnick
FAtt, AAML · 01 Sep 2018

Mitigating Sybils in Federated Learning Poisoning
Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh
AAML · 14 Aug 2018

Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal
XAI · 31 May 2018

Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras, Marcel van Gerven, W. Haselager
XAI · 20 Mar 2018

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
FaML · 24 Jun 2017

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
AIMat · 26 Sep 2016