arXiv: 2308.08162
Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations
16 August 2023
Mikolaj Sacha, Bartosz Jura, Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński
Papers citing "Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations" (29 / 29 papers shown)
ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts
Mikolaj Sacha, Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński · VLM · 61 / 29 / 0 · 28 Jan 2023

ProGReST: Prototypical Graph Regression Soft Trees for Molecular Property Prediction
Dawid Rymarczyk, D. Dobrowolski, Tomasz Danel · 93 / 4 / 0 · 07 Oct 2022

This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation
Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen, Michael C. Kampffmeyer · 57 / 49 / 0 · 27 Aug 2021

This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler · 54 / 63 / 0 · 05 May 2021

Visual Saliency Transformer
Nian Liu, Ni Zhang, Kaiyuan Wan, Ling Shao, Junwei Han · ViT · 297 / 360 / 0 · 25 Apr 2021

IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography
A. Barnett, F. Schwartz, Chaofan Tao, Chaofan Chen, Yinhao Ren, J. Lo, Cynthia Rudin · 75 / 140 / 0 · 23 Mar 2021

XProtoNet: Diagnosis in Chest Radiography with Global and Local Explanations
Eunji Kim, Siwon Kim, Minji Seo, Sungroh Yoon · ViT, FAtt · 71 / 115 / 0 · 19 Mar 2021

Attribute Prototype Network for Zero-Shot Learning
Wenjia Xu, Yongqin Xian, Jiuniu Wang, Bernt Schiele, Zeynep Akata · 55 / 293 / 0 · 19 Aug 2020

Interpretable Deep Models for Cardiac Resynchronisation Therapy Response Prediction
Esther Puyol-Antón, Chong Chen, J. Clough, B. Ruijsink, B. Sidhu, ..., M. Elliott, Vishal S. Mehta, Daniel Rueckert, C. Rinaldi, A. King · 54 / 32 / 0 · 24 Jun 2020

Counterfactual VQA: A Cause-Effect Look at Language Bias
Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xiansheng Hua, Ji-Rong Wen · CML · 117 / 401 / 0 · 08 Jun 2020

There and Back Again: Revisiting Backpropagation Saliency Methods
Sylvestre-Alvise Rebuffi, Ruth C. Fong, Xu Ji, Andrea Vedaldi · FAtt, XAI · 68 / 113 / 0 · 06 Apr 2020

Concept Whitening for Interpretable Image Recognition
Zhi Chen, Yijie Bei, Cynthia Rudin · FAtt · 78 / 322 / 0 · 05 Feb 2020

Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth C. Fong, Mandela Patrick, Andrea Vedaldi · AAML · 73 / 416 / 0 · 18 Oct 2019

Interpretable Image Recognition with Hierarchical Prototypes
Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin · VLM · 83 / 111 / 0 · 25 Jun 2019

Interpreting Adversarially Trained Convolutional Neural Networks
Tianyuan Zhang, Zhanxing Zhu · AAML, GAN, FAtt · 102 / 161 / 0 · 23 May 2019

On the Connection Between Adversarial Robustness and Saliency Map Interpretability
Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb · AAML, FAtt · 58 / 162 / 0 · 10 May 2019

Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
Wieland Brendel, Matthias Bethge · SSL, FAtt · 96 / 561 / 0 · 20 Mar 2019

Looking for the Devil in the Details: Learning Trilinear Attention Sampling Network for Fine-grained Image Recognition
Heliang Zheng, Jianlong Fu, Zhengjun Zha, Jiebo Luo · 92 / 384 / 0 · 14 Mar 2019

Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded
Ramprasaath R. Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, Devi Parikh · FAtt, VLM · 64 / 254 / 0 · 11 Feb 2019

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim · FAtt, AAML, XAI · 141 / 1,970 / 0 · 08 Oct 2018

This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin · 243 / 1,186 / 0 · 27 Jun 2018

Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola · MILM, XAI · 126 / 946 / 0 · 20 Jun 2018

Robustness May Be at Odds with Accuracy
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry · AAML · 104 / 1,783 / 0 · 30 May 2018

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres · FAtt · 219 / 1,842 / 0 · 30 Nov 2017

Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions
Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin · 176 / 591 / 0 · 13 Oct 2017

Adversarial Attacks on Neural Network Policies
Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, Pieter Abbeel · MLAU, AAML · 94 / 837 / 0 · 08 Feb 2017

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 1.2K / 16,990 / 0 · 16 Feb 2016

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy · AAML, GAN · 280 / 19,107 / 0 · 20 Dec 2014

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 312 / 7,308 / 0 · 20 Dec 2013