This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
arXiv:2105.02968 · 5 May 2021
Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler
Papers citing "This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks" (49 of 49 papers shown):
MERA: Multimodal and Multiscale Self-Explanatory Model with Considerably Reduced Annotation for Lung Nodule Diagnosis
Jiahao Lu, Chong Yin, Silvia Ingala, Kenny Erleben, M. Nielsen, S. Darkner
27 Apr 2025

On Background Bias of Post-Hoc Concept Embeddings in Computer Vision DNNs
Gesina Schwalbe, Georgii Mikriukov, Edgar Heinert, Stavros Gerolymatos, Mert Keser, Alois Knoll, Matthias Rottmann, Annika Mütze
11 Apr 2025

Birds look like cars: Adversarial analysis of intrinsically interpretable deep learning
Hubert Baniecki, P. Biecek
11 Mar 2025 · Topics: AAML

Rashomon Sets for Prototypical-Part Networks: Editing Interpretable Models in Real-Time
J. Donnelly, Zhicheng Guo, A. Barnett, Hayden McTavish, Chaofan Chen, Cynthia Rudin
03 Mar 2025

QPM: Discrete Optimization for Globally Interpretable Image Classification
Thomas Norrenbrock, Timo Kaiser, Sovan Biswas, R. Manuvinakurike, Bodo Rosenhahn
27 Feb 2025

Tell me why: Visual foundation models as self-explainable classifiers
Hugues Turbé, Mina Bjelogrlic, G. Mengaldo, Christian Lovis
26 Feb 2025

A Robust Prototype-Based Network with Interpretable RBF Classifier Foundations
S. Saralajew, Ashish Rana, T. Villmann, Ammar Shaker
20 Dec 2024 · Topics: OOD

Strategies and Challenges of Efficient White-Box Training for Human Activity Recognition
Daniel Geissler, Bo Zhou, P. Lukowicz
11 Dec 2024 · Topics: HAI

Prototype-Based Methods in Explainable AI and Emerging Opportunities in the Geosciences
Anushka Narayanan, Karianne J. Bergen
22 Oct 2024

Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting
Maxime Kayser, Bayar I. Menzat, Cornelius Emde, Bogdan Bercean, Alex Novak, Abdala Espinosa, B. Papież, Susanne Gaube, Thomas Lukasiewicz, Oana-Maria Camburu
16 Oct 2024

CaBRNet, an open-source library for developing and evaluating Case-Based Reasoning Models
Romain Xu-Darme, Aymeric Varasse, Alban Grastien, Julien Girard, Zakaria Chihani
25 Sep 2024

Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?
Jae Hee Lee, Georgii Mikriukov, Gesina Schwalbe, Stefan Wermter, D. Wolter
20 Sep 2024

Semantic Prototypes: Enhancing Transparency Without Black Boxes
Orfeas Menis Mastromichalakis, Giorgos Filandrianos, Jason Liartis, Edmund Dervakos, Giorgos Stamou
18 Jul 2024

This Probably Looks Exactly Like That: An Invertible Prototypical Network
Zachariah Carmichael, Timothy Redgrave, Daniel Gonzalez Cedre, Walter J. Scheirer
16 Jul 2024 · Topics: BDL

ProtoS-ViT: Visual foundation models for sparse self-explainable classifications
Hugues Turbé, Mina Bjelogrlic, G. Mengaldo, Christian Lovis
14 Jun 2024 · Topics: ViT

Enhancing Interpretability of Vertebrae Fracture Grading using Human-interpretable Prototypes
Poulami Sinhamahapatra, Suprosanna Shit, Anjany Sekuboyina, M. Husseini, D. Schinz, Nicolas Lenhart, Bjoern Menze, Jan Kirschke, Karsten Roscher, Stephan Guennemann
03 Apr 2024

On the Concept Trustworthiness in Concept Bottleneck Models
Qihan Huang, Jingwen Hu, Haofei Zhang, Yong Wang, Mingli Song
21 Mar 2024

ComFe: An Interpretable Head for Vision Transformers
Evelyn J. Mannix, Howard Bondell
07 Mar 2024 · Topics: VLM, ViT

Vision Transformers with Natural Language Semantics
Young-Kyung Kim, Matías Di Martino, Guillermo Sapiro
27 Feb 2024 · Topics: ViT

Q-SENN: Quantized Self-Explaining Neural Networks
Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn
21 Dec 2023 · Topics: FAtt, AAML, MILM

ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation [Technical Report]
Hamed Ayoobi, Nico Potyka, Francesca Toni
26 Nov 2023

Robust Text Classification: Analyzing Prototype-Based Networks
Zhivar Sourati, D. Deshpande, Filip Ilievski, Kiril Gashteovski, S. Saralajew
11 Nov 2023 · Topics: OOD, OffRL

This Reads Like That: Deep Learning for Interpretable Natural Language Processing
Claudio Fanconi, Moritz Vandenhirtz, Severin Husmann, Julia E. Vogt
25 Oct 2023 · Topics: FAtt

Sanity checks for patch visualisation in prototype-based image classification
Romain Xu-Darme, Georges Quénot, Zakaria Chihani, M. Rousset
25 Oct 2023

On the Interpretability of Part-Prototype Based Classifiers: A Human Centric Analysis
Omid Davoodi, Shayan Mohammadizadehsamakosh, Majid Komeili
10 Oct 2023

PRIME: Prioritizing Interpretability in Failure Mode Extraction
Keivan Rezaei, Mehrdad Saberi, Mazda Moayeri, Soheil Feizi
29 Sep 2023

Pixel-Grounded Prototypical Part Networks
Zachariah Carmichael, Suhas Lohit, A. Cherian, Michael Jeffrey Jones, Walter J. Scheirer
25 Sep 2023

Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning
Emanuele Marconato, Andrea Passerini, Stefano Teso
14 Sep 2023

Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations
Mikolaj Sacha, Bartosz Jura, Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński
16 Aug 2023

The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers
Meike Nauta, Christin Seifert
26 Jul 2023

Take 5: Interpretable Image Classification with a Handful of Features
Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn
23 Mar 2023 · Topics: FAtt

ICICLE: Interpretable Class Incremental Continual Learning
Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartlomiej Twardowski
14 Mar 2023 · Topics: CLL

Schema Inference for Interpretable Image Classification
Haofei Zhang, Mengqi Xue, Xiaokang Liu, Kaixuan Chen, Mingli Song
12 Mar 2023 · Topics: OCL

Sanity checks and improvements for patch visualisation in prototype-based image classification
Romain Xu-Darme, Georges Quénot, Zakaria Chihani, M. Rousset
20 Jan 2023

Evaluation and Improvement of Interpretability for Self-Explainable Part-Prototype Networks
Qihan Huang, Mengqi Xue, Wenqi Huang, Haofei Zhang, Yongcheng Jing, Mingli Song
12 Dec 2022 · Topics: AAML

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, Andrés Monroy-Hernández
02 Oct 2022

Leveraging Explanations in Interactive Machine Learning: An Overview
Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth M. Daly
29 Jul 2022 · Topics: XAI, FAtt, LRM

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
Tilman Räuker, A. Ho, Stephen Casper, Dylan Hadfield-Menell
27 Jul 2022 · Topics: AAML, AI4CE

Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability
V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky
20 Jul 2022 · Topics: FAtt

Concept-level Debugging of Part-Prototype Networks
A. Bontempelli, Stefano Teso, Katya Tentori, Fausto Giunchiglia, Andrea Passerini
31 May 2022

GlanceNets: Interpretabile, Leak-proof Concept-based Models
Emanuele Marconato, Andrea Passerini, Stefano Teso
31 May 2022

Explainable Deep Learning Methods in Medical Image Classification: A Survey
Cristiano Patrício, João C. Neves, Luís F. Teixeira
10 May 2022 · Topics: XAI

Complexity from Adaptive-Symmetries Breaking: Global Minima in the Statistical Mechanics of Deep Neural Networks
Shaun Li
03 Jan 2022 · Topics: AI4CE

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

Interpretable Image Classification with Differentiable Prototypes Assignment
Dawid Rymarczyk, Lukasz Struski, Michal Górszczak, K. Lewandowska, Jacek Tabor, Bartosz Zieliński
06 Dec 2021

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin, Henry C. Woodruff, A. Chatterjee, Philippe Lambin
01 Nov 2021

Toward a Unified Framework for Debugging Concept-based Models
A. Bontempelli, Fausto Giunchiglia, Andrea Passerini, Stefano Teso
23 Sep 2021

FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging
Karim Lekadira, Richard Osuala, C. Gallin, Noussair Lazrak, Kaisar Kushibar, ..., Nickolas Papanikolaou, Zohaib Salahuddin, Henry C. Woodruff, Philippe Lambin, L. Martí-Bonmatí
20 Sep 2021 · Topics: AI4TS

ProtoMIL: Multiple Instance Learning with Prototypical Parts for Whole-Slide Image Classification
Dawid Rymarczyk, Adam Pardyl, Jaroslaw Kraus, Aneta Kaczyńska, M. Skomorowski, Bartosz Zieliński
24 Aug 2021