arXiv:1509.06321
Evaluating the visualization of what a Deep Neural Network has learned
21 September 2015
Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller
Tags: XAI

Papers citing "Evaluating the visualization of what a Deep Neural Network has learned" (50 of 510 papers shown)

  • The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus (14 Feb 2023). Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne.
  • On The Coherence of Quantitative Evaluation of Visual Explanations (14 Feb 2023). Benjamin Vandersmissen, José Oramas. [XAI, FAtt]
  • Explaining text classifiers through progressive neighborhood approximation with realistic samples (11 Feb 2023). Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder. [AI4TS]
  • A novel approach to generate datasets with XAI ground truth to evaluate image models (11 Feb 2023). Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover.
  • Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal (07 Feb 2023). M. Hashemi, Ali Darejeh, Francisco Cruz.
  • Improving Interpretability via Explicit Word Interaction Graph Layer (03 Feb 2023). Arshdeep Sekhon, Hanjie Chen, A. Shrivastava, Zhe Wang, Yangfeng Ji, Yanjun Qi. [AI4CE, MILM]
  • A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics (31 Jan 2023). Naveed Akhtar. [XAI, VLM]
  • Multi-dimensional concept discovery (MCD): A unifying framework with completeness guarantees (27 Jan 2023). Johanna Vielhaben, Stefan Blücher, Nils Strodthoff.
  • HEAR4Health: A blueprint for making computer audition a staple of modern healthcare (25 Jan 2023). Andreas Triantafyllopoulos, Alexander Kathan, Alice Baird, Lukas Christ, Alexander Gebhard, ..., Shahin Amiriparian, K. D. Bartl-Pokorny, A. Batliner, Florian B. Pokorny, Björn W. Schuller.
  • Opti-CAM: Optimizing saliency maps for interpretability (17 Jan 2023). Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache.
  • Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces (30 Dec 2022). Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon. [FAtt]
  • Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods (13 Dec 2022). Ming-Xiu Jiang, Saeed Khorram, Li Fuxin. [FAtt]
  • Optimizing Explanations by Network Canonization and Hyperparameter Search (30 Nov 2022). Frederik Pahde, Galip Umit Yolcu, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin.
  • Towards Improved Input Masking for Convolutional Neural Networks (26 Nov 2022). S. Balasubramanian, S. Feizi. [AAML]
  • Evaluating Feature Attribution Methods for Electrocardiogram (23 Nov 2022). J. Suh, Jimyeong Kim, Euna Jung, Wonjong Rhee. [FAtt]
  • Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations (22 Nov 2022). Alexander Binder, Leander Weber, Sebastian Lapuschkin, G. Montavon, Klaus-Robert Müller, Wojciech Samek. [FAtt, AAML]
  • Explaining Image Classifiers with Multiscale Directional Image Representation (22 Nov 2022). Stefan Kolek, Robert Windesheim, Héctor Andrade-Loarca, Gitta Kutyniok, Ron Levie.
  • A Survey on Explainable Reinforcement Learning: Concepts, Algorithms, Challenges (12 Nov 2022). Yunpeng Qing, Shunyu Liu, Jie Song, Huiqiong Wang, Mingli Song. [XAI]
  • What Makes a Good Explanation?: A Harmonized View of Properties of Explanations (10 Nov 2022). Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez. [XAI, FAtt]
  • Towards Human-Centred Explainability Benchmarks For Text Classification (10 Nov 2022). Viktor Schlegel, Erick Mendez Guzman, R. Batista-Navarro.
  • On the Robustness of Explanations of Deep Neural Network Models: A Survey (09 Nov 2022). Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian. [XAI, FAtt, AAML]
  • Privacy Meets Explainability: A Comprehensive Impact Benchmark (08 Nov 2022). S. Saifullah, Dominique Mercier, Adriano Lucieri, Andreas Dengel, Sheraz Ahmed.
  • New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound (05 Nov 2022). Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora. [AAML, FAtt, XAI]
  • Extending Logic Explained Networks to Text Classification (04 Nov 2022). Rishabh Jain, Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Davide Buffelli, Pietro Liò. [FAtt, XAI]
  • Analysis of a Deep Learning Model for 12-Lead ECG Classification Reveals Learned Features Similar to Diagnostic Criteria (03 Nov 2022). Theresa Bender, J. Beinecke, D. Krefting, Carolin Müller, Henning Dathe, T. Seidler, Nicolai Spicher, Anne-Christin Hauschild. [FAtt]
  • Trustworthy Human Computation: A Survey (22 Oct 2022). H. Kashima, S. Oyama, Hiromi Arai, Junichiro Mori.
  • Toward the application of XAI methods in EEG-based systems (12 Oct 2022). Andrea Apicella, Francesco Isgrò, A. Pollastro, R. Prevete. [OOD, AI4TS]
  • Reflection of Thought: Inversely Eliciting Numerical Reasoning in Language Models via Solving Linear Systems (11 Oct 2022). Fan Zhou, Haoyu Dong, Qian Liu, Zhoujun Cheng, Shi Han, Dongmei Zhang. [ReLM, LRM]
  • Self-explaining Hierarchical Model for Intraoperative Time Series (10 Oct 2022). Dingwen Li, Bing Xue, C. King, Bradley A. Fritz, M. Avidan, Joanna Abraham, Chenyang Lu. [AI4CE]
  • Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors (07 Oct 2022). Federico Baldassarre, Quentin Debard, Gonzalo Fiz Pontiveros, Tri Kurniawan Wijaya.
  • Evaluation of importance estimators in deep learning classifiers for Computed Tomography (30 Sep 2022). L. Brocki, Wistan Marchadour, Jonas Maison, B. Badic, P. Papadimitroulas, M. Hatt, Franck Vermet, N. C. Chung.
  • Deep learning and multi-level featurization of graph representations of microstructural data (29 Sep 2022). Reese E. Jones, C. Safta, A. Frankel. [AI4CE]
  • Towards Faithful Model Explanation in NLP: A Survey (22 Sep 2022). Qing Lyu, Marianna Apidianaki, Chris Callison-Burch. [XAI]
  • Generating detailed saliency maps using model-agnostic methods (04 Sep 2022). Maciej Sakowicz. [FAtt]
  • Automatic Infectious Disease Classification Analysis with Concept Discovery (28 Aug 2022). Elena Sizikova, Joshua Vendrow, Xu Cao, Rachel Grotheer, Jamie Haddock, ..., Huy V. Vo, Chuntian Wang, Megan Coffee, Kathryn Leonard, Deanna Needell.
  • Discovering Transferable Forensic Features for CNN-generated Images Detection (24 Aug 2022). Keshigeyan Chandrasegaran, Ngoc-Trung Tran, Alexander Binder, Ngai-man Cheung. [AAML]
  • Causality-Inspired Taxonomy for Explainable Artificial Intelligence (19 Aug 2022). Pedro C. Neto, Tiago B. Gonçalves, João Ribeiro Pinto, W. Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso. [XAI]
  • Adding Context to Source Code Representations for Deep Learning (30 Jul 2022). Fuwei Tian, Christoph Treude.
  • Deep neural network heatmaps capture Alzheimer's disease patterns reported in a large meta-analysis of neuroimaging studies (22 Jul 2022). Dingqian Wang, N. Honnorat, P. Fox, K. Ritter, Simon B. Eickhoff, S. Seshadri, Mohamad Habes.
  • Anomalous behaviour in loss-gradient based interpretability methods (15 Jul 2022). Vinod Subramanian, Siddharth Gururani, Emmanouil Benetos, Mark Sandler.
  • Human-Centric Research for NLP: Towards a Definition and Guiding Questions (10 Jul 2022). Bhushan Kotnis, Kiril Gashteovski, J. Gastinger, G. Serra, Francesco Alesiani, T. Sztyler, Ammar Shaker, Na Gong, Carolin (Haas) Lawrence, Zhao Xu.
  • SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance (08 Jul 2022). Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kévin Bailly.
  • Fidelity of Ensemble Aggregation for Saliency Map Explanations using Bayesian Optimization Techniques (04 Jul 2022). Yannik Mahlau, Christian Nolde. [FAtt]
  • BAGEL: A Benchmark for Assessing Graph Neural Network Explanations (28 Jun 2022). Mandeep Rathee, Thorben Funke, Avishek Anand, Megha Khosla.
  • The Manifold Hypothesis for Gradient-Based Explanations (15 Jun 2022). Sebastian Bordt, Uddeshya Upadhyay, Zeynep Akata, U. V. Luxburg. [FAtt, AAML]
  • Geometrically Guided Integrated Gradients (13 Jun 2022). Md. Mahfuzur Rahman, N. Lewis, Sergey Plis. [FAtt, AAML]
  • Diffeomorphic Counterfactuals with Generative Models (10 Jun 2022). Ann-Kathrin Dombrowski, Jan E. Gerken, Klaus-Robert Müller, Pan Kessel. [DiffM, BDL]
  • DORA: Exploring Outlier Representations in Deep Neural Networks (09 Jun 2022). Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Müller, Marina M.-C. Höhne.
  • Saliency Cards: A Framework to Characterize and Compare Saliency Methods (07 Jun 2022). Angie Boggust, Harini Suresh, Hendrik Strobelt, John Guttag, Arvindmani Satyanarayan. [FAtt, XAI]
  • Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images (06 Jun 2022). Tom Ron, M. Weiler-Sagie, Tamir Hazan. [FAtt, MedIm]