This Looks Like That: Deep Learning for Interpretable Image Recognition

27 June 2018 · arXiv:1806.10574
Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin

Papers citing "This Looks Like That: Deep Learning for Interpretable Image Recognition"

50 / 602 papers shown

  • Decoupled Multimodal Prototypes for Visual Recognition with Missing Modalities (13 May 2025). Jueqing Lu, Yuanyuan Qi, Xiaohao Yang, Shujie Zhou, Lan Du.
  • Implet: A Post-hoc Subsequence Explainer for Time Series Models (13 May 2025). Fanyu Meng, Ziwen Kan, Shahbaz Rezaei, Z. Kong, Xin Chen, Xin Liu. [AI4TS]
  • From Pixels to Perception: Interpretable Predictions via Instance-wise Grouped Feature Selection (09 May 2025). Moritz Vandenhirtz, Julia E. Vogt.
  • This part looks alike this: identifying important parts of explained instances and prototypes (08 May 2025). Jacek Karolczak, Jerzy Stefanowski.
  • PointExplainer: Towards Transparent Parkinson's Disease Diagnosis (04 May 2025). Xuechao Wang, S. Nõmm, Junqing Huang, Kadri Medijainen, A. Toomela, Michael Ruzhansky. [AAML, FAtt]
  • SCOPE-MRI: Bankart Lesion Detection as a Case Study in Data Curation and Deep Learning for Challenging Diagnoses (29 Apr 2025). Sahil Sethi, Sai Reddy, Mansi Sakarvadia, Jordan Serotte, Darlington Nwaudo, Nicholas Maassen, Lewis Shi.
  • If Concept Bottlenecks are the Question, are Foundation Models the Answer? (28 Apr 2025). Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato.
  • MERA: Multimodal and Multiscale Self-Explanatory Model with Considerably Reduced Annotation for Lung Nodule Diagnosis (27 Apr 2025). Jiahao Lu, Chong Yin, Silvia Ingala, Kenny Erleben, M. Nielsen, S. Darkner.
  • Multi-Grained Compositional Visual Clue Learning for Image Intent Recognition (25 Apr 2025). Yin Tang, Jiankai Li, Hongyu Yang, Xuan Dong, Lifeng Fan, Weixin Li.
  • Interpretable Affordance Detection on 3D Point Clouds with Probabilistic Prototypes (25 Apr 2025). M. Li, Korbinian Franz Rudolf, Nils Blank, Rudolf Lioutikov. [3DPC]
  • Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts (24 Apr 2025). M. Zarlenga, Gabriele Dominici, Pietro Barbiero, Z. Shams, M. Jamnik. [KELM]
  • PCBEAR: Pose Concept Bottleneck for Explainable Action Recognition (17 Apr 2025). Jongseo Lee, Wooil Lee, Gyeong-Moon Park, Seong Tae Kim, Jinwoo Choi.
  • Can Masked Autoencoders Also Listen to Birds? (17 Apr 2025). Lukas Rauch, Ilyass Moummad, René Heinrich, Alexis Joly, Bernhard Sick, Christoph Scholz.
  • ProtoECGNet: Case-Based Interpretable Deep Learning for Multi-Label ECG Classification with Contrastive Learning (11 Apr 2025). S., David Chen, Thomas Statchen, Michael C. Burkhart, Nipun Bhandari, Bashar Ramadan, Brett Beaulieu-Jones.
  • On Background Bias of Post-Hoc Concept Embeddings in Computer Vision DNNs (11 Apr 2025). Gesina Schwalbe, Georgii Mikriukov, Edgar Heinert, Stavros Gerolymatos, Mert Keser, Alois Knoll, Matthias Rottmann, Annika Mütze.
  • Language Guided Concept Bottleneck Models for Interpretable Continual Learning (30 Mar 2025). Lu Yu, Haoyu Han, Zhe Tao, Hantao Yao, Changsheng Xu. [CLL]
  • Patronus: Bringing Transparency to Diffusion Models with Prototypes (28 Mar 2025). Nina Weng, Aasa Feragen, Siavash Bigdeli. [DiffM]
  • Towards Human-Understandable Multi-Dimensional Concept Discovery (24 Mar 2025). Arne Grobrugge, Niklas Kühl, G. Satzger, Philipp Spitzer.
  • Self-Explaining Neural Networks for Business Process Monitoring (23 Mar 2025). Shahaf Bassan, Shlomit Gur, Sergey Zeltyn, Konstantinos Mavrogiorgos, Ron Eliav, Dimosthenis Kyriazis.
  • Interpretable Machine Learning for Oral Lesion Diagnosis through Prototypical Instances Identification (21 Mar 2025). Alessio Cascione, Mattia Setzu, Federico A. Galatolo, M. G. Cimino, Riccardo Guidotti.
  • An interpretable approach to automating the assessment of biofouling in video footage (17 Mar 2025). Evelyn J. Mannix, Bartholomew A. Woodham.
  • Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes (17 Mar 2025). Nhi Pham, Bernt Schiele, Adam Kortylewski, Jonas Fischer.
  • ProtoDepth: Unsupervised Continual Depth Completion with Prototypes (17 Mar 2025). Patrick Rim, Hyoungseob Park, Suchisrit Gangopadhyay, Ziyao Zeng, Younjoon Chung, Alex Wong. [CLL, MDE]
  • Enhancing Job Salary Prediction with Disentangled Composition Effect Modeling: A Neural Prototyping Approach (17 Mar 2025). Yang Ji, Ying Sun, Hengshu Zhu.
  • A Transformer and Prototype-based Interpretable Model for Contextual Sarcasm Detection (14 Mar 2025). Ximing Wen, Rezvaneh Rezapour.
  • Interpretable Image Classification via Non-parametric Part Prototype Learning (13 Mar 2025). Zhijie Zhu, Lei Fan, M. Pagnucco, Yang Song.
  • Birds look like cars: Adversarial analysis of intrinsically interpretable deep learning (11 Mar 2025). Hubert Baniecki, P. Biecek. [AAML]
  • Interactive Medical Image Analysis with Concept-based Similarity Reasoning (10 Mar 2025). Ta Duc Huy, Sen Kim Tran, Phan Nguyen, Nguyen Hoang Tran, Tran Bao Sam, A. Hengel, Zhibin Liao, Johan W. Verjans, Minh Nguyen Nhat To, Vu Minh Hieu Phan.
  • Exploring Interpretability for Visual Prompt Tuning with Hierarchical Concepts (08 Mar 2025). Yubin Wang, Xinyang Jiang, De Cheng, Xiangqian Zhao, Zilong Wang, Dongsheng Li, Cairong Zhao. [VLM]
  • Causally Reliable Concept Bottleneck Models (06 Mar 2025). Giovanni De Felice, Arianna Casanova Flores, Francesco De Santis, Silvia Santini, Johannes Schneider, Pietro Barbiero, Alberto Termine.
  • Exploring Neural Ordinary Differential Equations as Interpretable Healthcare Classifiers (05 Mar 2025). Shi Li. [AI4CE]
  • Rashomon Sets for Prototypical-Part Networks: Editing Interpretable Models in Real-Time (03 Mar 2025). J. Donnelly, Zhicheng Guo, A. Barnett, Hayden McTavish, Chaofan Chen, Cynthia Rudin.
  • QPM: Discrete Optimization for Globally Interpretable Image Classification (27 Feb 2025). Thomas Norrenbrock, T. Kaiser, Sovan Biswas, R. Manuvinakurike, Bodo Rosenhahn.
  • Tell me why: Visual foundation models as self-explainable classifiers (26 Feb 2025). Hugues Turbé, Mina Bjelogrlic, G. Mengaldo, Christian Lovis.
  • BarkXAI: A Lightweight Post-Hoc Explainable Method for Tree Species Classification with Quantifiable Concepts (26 Feb 2025). Yunmei Huang, Songlin Hou, Zachary Nelson Horve, Songlin Fei.
  • Model-agnostic Coreset Selection via LLM-based Concept Bottlenecks (23 Feb 2025). Akshay Mehra, Trisha Mittal, Subhadra Gopalakrishnan, Joshua Kimball.
  • Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks (17 Feb 2025). Jon Vadillo, Roberto Santana, J. A. Lozano, Marta Z. Kwiatkowska. [BDL, AAML]
  • Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens (16 Feb 2025). Samuele Bortolotti, Emanuele Marconato, Paolo Morettin, Andrea Passerini, Stefano Teso.
  • This looks like what? Challenges and Future Research Directions for Part-Prototype Models (13 Feb 2025). Khawla Elhadri, Tomasz Michalski, Adam Wróbel, Jorg Schlotterer, Bartosz Zieliński, C. Seifert.
  • Sparse Autoencoders for Scientifically Rigorous Interpretation of Vision Models (10 Feb 2025). Samuel Stevens, Wei-Lun Chao, T. Berger-Wolf, Yu-Chuan Su. [VLM]
  • B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable (28 Jan 2025). Shreyash Arya, Sukrut Rao, Moritz Bohle, Bernt Schiele.
  • COMIX: Compositional Explanations using Prototypes (10 Jan 2025). S. Sivaprasad, D. Kangin, Plamen Angelov, Mario Fritz.
  • An Image-based Typology for Visualization (08 Jan 2025). Jian Chen, Petra Isenberg, R. Laramee, Tobias Isenberg, Michael Sedlmair, Torsten Moeller, Rui Li.
  • Label-free Concept Based Multiple Instance Learning for Gigapixel Histopathology (06 Jan 2025). Susu Sun, Leslie Tessier, Frédérique Meeuwsen, Clément Grisi, Dominique van Midden, G. Litjens, Christian F. Baumgartner.
  • Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations (03 Jan 2025). Xin-Chao Xu, Yi Qin, Lu Mi, Hao Wang, X. Li.
  • Regulation of Language Models With Interpretability Will Likely Result In A Performance Trade-Off (12 Dec 2024). Eoin M. Kenny, Julie A. Shah.
  • OMENN: One Matrix to Explain Neural Networks (03 Dec 2024). Adam Wróbel, Mikołaj Janusz, Bartosz Zieliński, Dawid Rymarczyk. [FAtt, AAML]
  • Explaining the Impact of Training on Vision Models via Activation Clustering (29 Nov 2024). Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair.
  • Explainable deep learning improves human mental models of self-driving cars (27 Nov 2024). Eoin M. Kenny, Akshay Dharmavaram, Sang Uk Lee, Tung Phan-Minh, Shreyas Rajesh, Yunqing Hu, Laura Major, Momchil S. Tomov, Julie A. Shah.
  • GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers (23 Nov 2024). Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle.