Sanity checks for patch visualisation in prototype-based image classification
arXiv:2311.16120 · 25 October 2023
Romain Xu-Darme, Georges Quénot, Zakaria Chihani, M. Rousset
Papers citing "Sanity checks for patch visualisation in prototype-based image classification" (18 of 18 papers shown):
Metrics for saliency map evaluation of deep learning explanation methods. T. Gomez, Thomas Fréour, Harold Mouchère. Tags: XAI, FAtt. 44 citations. 31 Jan 2022.
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert. Tags: ELM, XAI. 418 citations. 20 Jan 2022.
Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes. Jonathan Donnelly, A. Barnett, Chaofan Chen. Tags: 3DH. 129 citations. 29 Nov 2021.
This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation. Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen, Michael C. Kampffmeyer. 49 citations. 27 Aug 2021.
Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI. Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang. Tags: AAML. 4 citations. 16 Jul 2021.
This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks. Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler. 63 citations. 05 May 2021.
Benchmarking and Survey of Explanation Methods for Black Box Models. F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo. Tags: XAI. 230 citations. 25 Feb 2021.
Sanity Checks for Saliency Metrics. Richard J. Tomsett, Daniel Harborne, Supriyo Chakraborty, Prudhvi K. Gurram, Alun D. Preece. Tags: XAI. 170 citations. 29 Nov 2019.
Sanity Checks for Saliency Maps. Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim. Tags: FAtt, AAML, XAI. 1,970 citations. 08 Oct 2018.
This Looks Like That: Deep Learning for Interpretable Image Recognition. Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin. 1,187 citations. 27 Jun 2018.
Towards Robust Interpretability with Self-Explaining Neural Networks. David Alvarez-Melis, Tommi Jaakkola. Tags: MILM, XAI. 948 citations. 20 Jun 2018.
RISE: Randomized Input Sampling for Explanation of Black-box Models. Vitali Petsiuk, Abir Das, Kate Saenko. Tags: FAtt. 1,176 citations. 19 Jun 2018.
Learning Important Features Through Propagating Activation Differences. Avanti Shrikumar, Peyton Greenside, A. Kundaje. Tags: FAtt. 3,884 citations. 10 Apr 2017.
Axiomatic Attribution for Deep Networks. Mukund Sundararajan, Ankur Taly, Qiqi Yan. Tags: OOD, FAtt. 6,027 citations. 04 Mar 2017.
Grad-CAM: Why did you say that? Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, Dhruv Batra. Tags: FAtt. 476 citations. 22 Nov 2016.
Not Just a Black Box: Learning Important Features Through Propagating Activation Differences. Avanti Shrikumar, Peyton Greenside, A. Shcherbina, A. Kundaje. Tags: FAtt. 791 citations. 05 May 2016.
Striving for Simplicity: The All Convolutional Net. Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller. Tags: FAtt. 4,681 citations. 21 Dec 2014.
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. Karen Simonyan, Andrea Vedaldi, Andrew Zisserman. Tags: FAtt. 7,321 citations. 20 Dec 2013.