ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes

29 November 2021
Jonathan Donnelly
A. Barnett
Chaofan Chen
    3DH

Papers citing "Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes"

37 / 87 papers shown
Advancing Post Hoc Case Based Explanation with Feature Highlighting
Eoin M. Kenny
Eoin Delaney
Mark T. Keane
31
5
0
06 Nov 2023
This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations
Chiyu Ma
Brandon Zhao
Chaofan Chen
Cynthia Rudin
26
26
0
28 Oct 2023
Human-Guided Complexity-Controlled Abstractions
Andi Peng
Mycal Tucker
Eoin M. Kenny
Noga Zaslavsky
Pulkit Agrawal
Julie A. Shah
25
1
0
26 Oct 2023
Sanity checks for patch visualisation in prototype-based image classification
Romain Xu-Darme
Georges Quénot
Zakaria Chihani
M. Rousset
19
6
0
25 Oct 2023
Transitivity Recovering Decompositions: Interpretable and Robust Fine-Grained Relationships
Abhra Chaudhuri
Massimiliano Mancini
Zeynep Akata
Anjan Dutta
29
2
0
24 Oct 2023
On the Interpretability of Part-Prototype Based Classifiers: A Human Centric Analysis
Omid Davoodi
Shayan Mohammadizadehsamakosh
Majid Komeili
10
11
0
10 Oct 2023
ProtoExplorer: Interpretable Forensic Analysis of Deepfake Videos using Prototype Exploration and Refinement
M. D. L. D. Bouter
J. Pardo
Z. Geradts
M. Worring
13
10
0
20 Sep 2023
Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach
Guillaume Jeanneret
Loïc Simon
Frédéric Jurie
DiffM
30
12
0
14 Sep 2023
PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans
Giang Nguyen
Valerie Chen
Mohammad Reza Taesiri
Anh Totti Nguyen
32
4
0
25 Aug 2023
Improving Prototypical Visual Explanations with Reward Reweighing, Reselection, and Retraining
Aaron J. Li
Robin Netzorg
Zhihan Cheng
Zhuoqin Zhang
Bin Yu
22
3
0
08 Jul 2023
A differentiable Gaussian Prototype Layer for explainable Segmentation
M. Gerstenberger
Steffen Maass
Peter Eisert
S. Bosse
30
4
0
25 Jun 2023
Concept-Centric Transformers: Enhancing Model Interpretability through Object-Centric Concept Learning within a Shared Global Workspace
Jinyung Hong
Keun Hee Park
Theodore P. Pavlic
29
5
0
25 May 2023
Towards credible visual model interpretation with path attribution
Naveed Akhtar
Muhammad A. A. K. Jalwana
FAtt
22
5
0
23 May 2023
FICNN: A Framework for the Interpretation of Deep Convolutional Neural Networks
Hamed Behzadi-Khormouji
José Oramas
16
0
0
17 May 2023
MProtoNet: A Case-Based Interpretable Model for Brain Tumor Classification with 3D Multi-parametric Magnetic Resonance Imaging
Yuanyuan Wei
Roger Tam
Xiaoying Tang
MedIm
16
12
0
13 Apr 2023
ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification
Mohammad Reza Taesiri
Giang Nguyen
Sarra Habchi
C. Bezemer
Anh Totti Nguyen
VLM
32
20
0
11 Apr 2023
ICICLE: Interpretable Class Incremental Continual Learning
Dawid Rymarczyk
Joost van de Weijer
Bartosz Zieliński
Bartlomiej Twardowski
CLL
32
28
0
14 Mar 2023
Don't PANIC: Prototypical Additive Neural Network for Interpretable Classification of Alzheimer's Disease
Tom Nuno Wolf
Sebastian Pölsterl
Christian Wachinger
FAtt
24
6
0
13 Mar 2023
Visualizing Transferred Knowledge: An Interpretive Model of Unsupervised Domain Adaptation
Wenxi Xiao
Zhengming Ding
Hongfu Liu
FAtt
11
2
0
04 Mar 2023
Variational Information Pursuit for Interpretable Predictions
Aditya Chattopadhyay
Kwan Ho Ryan Chan
B. Haeffele
D. Geman
René Vidal
DRL
21
10
0
06 Feb 2023
Agnostic Visual Recommendation Systems: Open Challenges and Future Directions
L. Podo
Bardh Prenkaj
Paola Velardi
27
5
0
01 Feb 2023
ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts
Mikolaj Sacha
Dawid Rymarczyk
Lukasz Struski
Jacek Tabor
Bartosz Zieliński
VLM
32
29
0
28 Jan 2023
Sanity checks and improvements for patch visualisation in prototype-based image classification
Romain Xu-Darme
Georges Quénot
Zakaria Chihani
M. Rousset
10
3
0
20 Jan 2023
Learning Support and Trivial Prototypes for Interpretable Image Classification
Chong Wang
Yuyuan Liu
Yuanhong Chen
Fengbei Liu
Yu Tian
Davis J. McCarthy
Helen Frazer
G. Carneiro
34
24
0
08 Jan 2023
Hierarchical Explanations for Video Action Recognition
Sadaf Gulshad
Teng Long
Nanne van Noord
FAtt
23
6
0
01 Jan 2023
Evaluation and Improvement of Interpretability for Self-Explainable Part-Prototype Networks
Qihan Huang
Mengqi Xue
Wenqi Huang
Haofei Zhang
Jie Song
Yongcheng Jing
Mingli Song
AAML
24
26
0
12 Dec 2022
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim
E. A. Watkins
Olga Russakovsky
Ruth C. Fong
Andrés Monroy-Hernández
38
107
0
02 Oct 2022
ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
Mengqi Xue
Qihan Huang
Haofei Zhang
Lechao Cheng
Jie Song
Ming-hui Wu
Mingli Song
ViT
25
53
0
22 Aug 2022
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto
Tiago B. Gonçalves
João Ribeiro Pinto
W. Silva
Ana F. Sequeira
Arun Ross
Jaime S. Cardoso
XAI
33
12
0
19 Aug 2022
Visual correspondence-based explanations improve AI robustness and human-AI team accuracy
Giang Nguyen
Mohammad Reza Taesiri
Anh Totti Nguyen
30
42
0
26 Jul 2022
Learnable Visual Words for Interpretable Image Recognition
Wenxi Xiao
Zhengming Ding
Hongfu Liu
VLM
25
2
0
22 May 2022
Explainable Deep Learning Methods in Medical Image Classification: A Survey
Cristiano Patrício
João C. Neves
Luís F. Teixeira
XAI
24
52
0
10 May 2022
But that's not why: Inference adjustment by interactive prototype revision
M. Gerstenberger
Thomas Wiegand
Peter Eisert
Sebastian Bosse
19
1
0
18 Mar 2022
A Cognitive Explainer for Fetal ultrasound images classifier Based on Medical Concepts
Ying-Shuai Wang
Yunxia Liu
Licong Dong
Xuzhou Wu
Huabin Zhang
Qiongyu Ye
Desheng Sun
Xiaobo Zhou
Kehong Yuan
27
0
0
19 Jan 2022
HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim
Nicole Meister
V. V. Ramaswamy
Ruth C. Fong
Olga Russakovsky
66
114
0
06 Dec 2021
Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning
A. Barnett
F. Schwartz
Chaofan Tao
Chaofan Chen
Yinhao Ren
J. Lo
Cynthia Rudin
62
21
0
12 Jul 2021
Benchmarking and Survey of Explanation Methods for Black Box Models
F. Bodria
F. Giannotti
Riccardo Guidotti
Francesca Naretto
D. Pedreschi
S. Rinzivillo
XAI
33
220
0
25 Feb 2021