A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations

arXiv 1805.07039 · 18 May 2018
Weili Nie, Yang Zhang, Ankit B. Patel
FAtt

Papers citing "A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations"

28 / 28 papers shown
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
  Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger
  XAI, ELM · 42 · 2 · 0 · 03 Jan 2025
The Representational Status of Deep Learning Models
  Eamon Duede
  19 · 0 · 0 · 21 Mar 2023
On The Coherence of Quantitative Evaluation of Visual Explanations
  Benjamin Vandersmissen, José Oramas
  XAI, FAtt · 34 · 3 · 0 · 14 Feb 2023
Explainable AI for Bioinformatics: Methods, Tools, and Applications
  Md. Rezaul Karim, Tanhim Islam, Oya Beyan, Christoph Lange, Michael Cochez, Dietrich Rebholz-Schuhmann, Stefan Decker
  29 · 68 · 0 · 25 Dec 2022
Impossibility Theorems for Feature Attribution
  Blair Bilodeau, Natasha Jaques, Pang Wei Koh, Been Kim
  FAtt · 20 · 68 · 0 · 22 Dec 2022
On the Relationship Between Explanation and Prediction: A Causal View
  Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
  FAtt, CML · 34 · 14 · 0 · 13 Dec 2022
Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods
  Mingqi Jiang, Saeed Khorram, Li Fuxin
  FAtt · 22 · 9 · 0 · 13 Dec 2022
What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
  Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
  XAI, FAtt · 33 · 18 · 0 · 10 Nov 2022
A Functional Information Perspective on Model Interpretation
  Itai Gat, Nitay Calderon, Roi Reichart, Tamir Hazan
  AAML, FAtt · 33 · 6 · 0 · 12 Jun 2022
A Fine-grained Interpretability Evaluation Benchmark for Neural NLP
  Lijie Wang, Yaozong Shen, Shu-ping Peng, Shuai Zhang, Xinyan Xiao, Hao Liu, Hongxuan Tang, Ying Chen, Hua-Hong Wu, Haifeng Wang
  ELM · 19 · 21 · 0 · 23 May 2022
ExSum: From Local Explanations to Model Understanding
  Yilun Zhou, Marco Tulio Ribeiro, J. Shah
  FAtt, LRM · 19 · 25 · 0 · 30 Apr 2022
Visualizing Deep Neural Networks with Topographic Activation Maps
  A. Krug, Raihan Kabir Ratul, Christopher Olson, Sebastian Stober
  FAtt, AI4CE · 36 · 3 · 0 · 07 Apr 2022
Evaluating saliency methods on artificial data with different background types
  Céline Budding, Fabian Eitel, K. Ritter, Stefan Haufe
  XAI, FAtt, MedIm · 27 · 5 · 0 · 09 Dec 2021
From Heatmaps to Structural Explanations of Image Classifiers
  Li Fuxin, Zhongang Qi, Saeed Khorram, Vivswan Shitole, Prasad Tadepalli, Minsuk Kahng, Alan Fern
  XAI, FAtt · 23 · 4 · 0 · 13 Sep 2021
A Comparison of Deep Saliency Map Generators on Multispectral Data in Object Detection
  Jens Bayer, David Münch, Michael Arens
  3DPC · 30 · 3 · 0 · 26 Aug 2021
Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features
  Ashkan Khakzar, Yang Zhang, W. Mansour, Yuezhi Cai, Yawei Li, Yucheng Zhang, Seong Tae Kim, Nassir Navab
  FAtt · 49 · 17 · 0 · 01 Apr 2021
Combining Semantic Guidance and Deep Reinforcement Learning For Generating Human Level Paintings
  Jaskirat Singh, Liang Zheng
  11 · 20 · 0 · 25 Nov 2020
Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
  M. Schlichtkrull, Nicola De Cao, Ivan Titov
  AI4CE · 33 · 214 · 0 · 01 Oct 2020
Survey of XAI in digital pathology
  Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström
  14 · 56 · 0 · 14 Aug 2020
Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency
  Robert Geirhos, Kristof Meding, Felix Wichmann
  19 · 116 · 0 · 30 Jun 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
  Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
  AAML, XAI · 38 · 370 · 0 · 30 Apr 2020
When Explanations Lie: Why Many Modified BP Attributions Fail
  Leon Sixt, Maximilian Granz, Tim Landgraf
  BDL, FAtt, XAI · 13 · 132 · 0 · 20 Dec 2019
Semantics for Global and Local Interpretation of Deep Neural Networks
  Jindong Gu, Volker Tresp
  AI4CE · 24 · 14 · 0 · 21 Oct 2019
Adversarial Robustness as a Prior for Learned Representations
  Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, A. Madry
  OOD, AAML · 21 · 63 · 0 · 03 Jun 2019
What Do Adversarially Robust Models Look At?
  Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
  19 · 5 · 0 · 19 May 2019
Interpretable machine learning: definitions, methods, and applications
  W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin Yu
  XAI, HAI · 47 · 1,416 · 0 · 14 Jan 2019
Sanity Checks for Saliency Maps
  Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
  FAtt, AAML, XAI · 64 · 1,927 · 0 · 08 Oct 2018
Classifying and Segmenting Microscopy Images Using Convolutional Multiple Instance Learning
  Oren Z. Kraus, Lei Jimmy Ba, B. Frey
  164 · 392 · 0 · 17 Nov 2015