Evaluating the visualization of what a Deep Neural Network has learned
21 September 2015
Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller
[XAI]
arXiv: 1509.06321
Papers citing "Evaluating the visualization of what a Deep Neural Network has learned" (50 / 510 papers shown)
Evaluating Model Explanations without Ground Truth. Kaivalya Rawal, Zihao Fu, Eoin Delaney, Chris Russell. 15 May 2025. [FAtt, XAI]
Explainable Artificial Intelligence Techniques for Software Development Lifecycle: A Phase-specific Survey. Lakshit Arora, Sanjay Surendranath Girija, Shashank Kapoor, Aman Raj, Dipen Pradhan, Ankit Shetgaonkar. 11 May 2025.
Wasserstein Distances Made Explainable: Insights into Dataset Shifts and Transport Phenomena. Philip Naumann, Jacob R. Kauffmann, G. Montavon. 09 May 2025.
What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI). Felix Kares, Timo Speith, Hanwei Zhang, Markus Langer. 23 Apr 2025. [FAtt, XAI]
Probabilistic Stability Guarantees for Feature Attributions. Helen Jin, Anton Xue, Weiqiu You, Surbhi Goel, Eric Wong. 18 Apr 2025.
Metric-Guided Synthesis of Class Activation Mapping. Alejandro Luque-Cerpa, Elizabeth Polgreen, Ajitha Rajan, Hazem Torfah. 14 Apr 2025.
Towards an Evaluation Framework for Explainable Artificial Intelligence Systems for Health and Well-being. Esperança Amengual-Alcover, Antoni Jaume-i-Capó, Miquel Miró-Nicolau, Gabriel Moyà Alcover, Antonia Paniza-Fullana. 11 Apr 2025.
Uncovering the Structure of Explanation Quality with Spectral Analysis. Johannes Maeß, G. Montavon, Shinichi Nakajima, Klaus-Robert Müller, Thomas Schnake. 11 Apr 2025. [FAtt]
Explainable AI-Based Interface System for Weather Forecasting Model. Soyeon Kim, Junho Choi, Yeji Choi, Subeen Lee, Artyom Stitsyuk, Minkyoung Park, Seongyeop Jeong, Youhyun Baek, Jaesik Choi. 01 Apr 2025. [XAI]
Unifying Perplexing Behaviors in Modified BP Attributions through Alignment Perspective. Guanhua Zheng, Jitao Sang, Changsheng Xu. 14 Mar 2025. [AAML, FAtt]
Axiomatic Explainer Globalness via Optimal Transport. Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy. 13 Mar 2025.
Grad-ECLIP: Gradient-based Visual and Textual Explanations for CLIP. Chenyang Zhao, Kun Wang, J. H. Hsiao, Antoni B. Chan. 26 Feb 2025. [CLIP]
Class-Dependent Perturbation Effects in Evaluating Time Series Attributions. Gregor Baer, Isel Grau, Chao Zhang, Pieter Van Gorp. 24 Feb 2025. [AAML]
Evaluate with the Inverse: Efficient Approximation of Latent Explanation Quality Distribution. Carlos Eiras-Franco, Anna Hedström, Marina M.-C. Höhne. 24 Feb 2025. [XAI]
A Close Look at Decomposition-based XAI-Methods for Transformer Language Models. L. Arras, Bruno Puri, Patrick Kahardipraja, Sebastian Lapuschkin, Wojciech Samek. 21 Feb 2025.
Extending Information Bottleneck Attribution to Video Sequences. Veronika Solopova, Lucas Schmidt, Dorothea Kolossa. 28 Jan 2025.
xMIL: Insightful Explanations for Multiple Instance Learning in Histopathology. Julius Hense, M. J. Idaji, Oliver Eberle, Thomas Schnake, Jonas Dippel, Laure Ciernik, Oliver Buchstab, Andreas Mock, Frederick Klauschen, Klaus-Robert Müller. 08 Jan 2025.
Interpretable Recognition of Fused Magnesium Furnace Working Conditions with Deep Convolutional Stochastic Configuration Networks. Li Weitao, Zhang Xinru, Wang Dianhui, Tong Qianqian, Chai Tianyou. 06 Jan 2025. [AI4CE]
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics. Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger. 03 Jan 2025. [XAI, ELM]
Accurate Explanation Model for Image Classifiers using Class Association Embedding. Ruitao Xie, Jingbang Chen, Limai Jiang, Rui Xiao, Yi-Lun Pan, Yunpeng Cai. 31 Dec 2024.
A Tale of Two Imperatives: Privacy and Explainability. Supriya Manna, Niladri Sett. 30 Dec 2024.
Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation. Davor Vukadin, Petar Afrić, Marin Šilić, Goran Delač. 12 Dec 2024. [FAtt]
From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation. Kristoffer Wickstrøm, Marina M.-C. Höhne, Anna Hedström. 07 Dec 2024. [AAML]
NormXLogit: The Head-on-Top Never Lies. Sina Abbasi, Mohammad Reza Modarres, Mohammad Taher Pilehvar. 25 Nov 2024.
Transparent Neighborhood Approximation for Text Classifier Explanation. Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder. 25 Nov 2024. [AAML]
Establishing and Evaluating Trustworthy AI: Overview and Research Challenges. Dominik Kowald, S. Scher, Viktoria Pammer-Schindler, Peter Müllner, Kerstin Waxnegger, ..., Andreas Truegler, Eduardo E. Veas, Roman Kern, Tomislav Nad, Simone Kopeinik. 15 Nov 2024.
Explanations that reveal all through the definition of encoding. A. Puli, Nhi Nguyen, Rajesh Ranganath. 04 Nov 2024. [FAtt, XAI]
Benchmarking XAI Explanations with Human-Aligned Evaluations. Rémi Kazmierczak, Steve Azzolin, Eloise Berthier, Anna Hedström, Patricia Delhomme, ..., Goran Frehse, Massimiliano Mancini, Baptiste Caramiaux, Andrea Passerini, Gianni Franchi. 04 Nov 2024.
SPES: Spectrogram Perturbation for Explainable Speech-to-Text Generation. Dennis Fucci, Marco Gaido, Beatrice Savoldi, Matteo Negri, Mauro Cettolo, L. Bentivogli. 03 Nov 2024.
Beyond Label Attention: Transparency in Language Models for Automated Medical Coding via Dictionary Learning. John Wu, David Wu, Jimeng Sun. 31 Oct 2024.
ConLUX: Concept-Based Local Unified Explanations. Junhao Liu, Haonan Yu, Xin Zhang. 16 Oct 2024. [FAtt, LRM]
Rethinking the Principle of Gradient Smooth Methods in Model Explanation. Linjiang Zhou, Chao Ma, Zepeng Wang, Xiaochuan Shi. 10 Oct 2024. [FAtt]
F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI. Xu Zheng, Farhad Shirani, Zhuomin Chen, Chaohao Lin, Wei Cheng, Wenbo Guo, Dongsheng Luo. 03 Oct 2024. [AAML]
One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability. Gabriel Kasmi, Amandine Brunetto, Thomas Fel, Jayneel Parekh. 02 Oct 2024. [AAML, FAtt]
Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations. Supriya Manna, Niladri Sett. 26 Sep 2024. [AAML]
Explaining word embeddings with perfect fidelity: Case study in research impact prediction. Lucie Dvorackova, Marcin P. Joachimiak, Michal Cerny, Adriana Kubecova, Vilem Sklenak, Tomas Kliegr. 24 Sep 2024.
Explainable AI needs formal notions of explanation correctness. Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki. 22 Sep 2024. [XAI]
MulCPred: Learning Multi-modal Concepts for Explainable Pedestrian Action Prediction. Yan Feng, Alexander Carballo, Keisuke Fujii, Robin Karlsson, Ming Ding, K. Takeda. 14 Sep 2024.
The Role of Explainable AI in Revolutionizing Human Health Monitoring: A Review. Abdullah Alharthi, Ahmed Alqurashi, Turki Alharbi, Mohammed Alammar, Nasser Aldosari, Houssem Bouchekara, Yusuf Shaaban, Mohammad Shoaib Shahriar, Abdulrahman Al Ayidh. 11 Sep 2024.
Entropy Loss: An Interpretability Amplifier of 3D Object Detection Network for Intelligent Driving. H. Yang, Shiyan Zhang, Zhuoyi Yang, Xinyu Zhang, Li Wang, Yifan Tang, Jilong Guo, J. Li. 01 Sep 2024.
Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features. Thomas Schnake, Farnoush Rezaei Jafaria, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, G. Montavon, Klaus-Robert Müller. 30 Aug 2024.
IBO: Inpainting-Based Occlusion to Enhance Explainable Artificial Intelligence Evaluation in Histopathology. Pardis Afshar, Sajjad Hashembeiki, Pouya Khani, Emad Fatemizadeh, M. Rohban. 29 Aug 2024.
Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers. Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin. 22 Aug 2024. [ViT]
Improving Network Interpretability via Explanation Consistency Evaluation. Hefeng Wu, Hao Jiang, Keze Wang, Ziyi Tang, Xianghuan He, Liang Lin. 08 Aug 2024. [FAtt, AAML]
Revisiting the robustness of post-hoc interpretability methods. Jiawen Wei, Hugues Turbé, G. Mengaldo. 29 Jul 2024. [AAML]
On the Evaluation Consistency of Attribution-based Explanations. Jiarui Duan, Haoling Li, Haofei Zhang, Hao Jiang, Mengqi Xue, Li Sun, Mingli Song, Jie Song. 28 Jul 2024. [XAI]
Practical Attribution Guidance for Rashomon Sets. Sichao Li, Amanda S. Barnard, Quanling Deng. 26 Jul 2024.
Benchmarking the Attribution Quality of Vision Models. Robin Hesse, Simone Schaub-Meyer, Stefan Roth. 16 Jul 2024. [FAtt]
XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach. Truong Thanh Hung Nguyen, Phuc Truong Loc Nguyen, Hung Cao. 16 Jul 2024.
Layer-Wise Relevance Propagation with Conservation Property for ResNet. Seitaro Otsuki, T. Iida, Félix Doublet, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi, Komei Sugiura. 12 Jul 2024. [FAtt]