ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Tell me why: Visual foundation models as self-explainable classifiers

26 February 2025
Hugues Turbé, Mina Bjelogrlic, G. Mengaldo, Christian Lovis
Papers citing "Tell me why: Visual foundation models as self-explainable classifiers"

5 papers shown:

1. AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains Into One
   Michael Ranzinger, Greg Heinrich, Jan Kautz, Pavlo Molchanov (VLM), 10 Dec 2023
2. ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
   Mengqi Xue, Qihan Huang, Haofei Zhang, Lechao Cheng, Mingli Song, Ming-hui Wu (ViT), 22 Aug 2022
3. Self-Supervised Visual Representation Learning with Semantic Grouping
   Xin Wen, Bingchen Zhao, Anlin Zheng, Xinming Zhang, Xiaojuan Qi (SSL), 30 May 2022
4. This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
   Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler, 05 May 2021
5. DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures
   Huanrui Yang, W. Wen, H. Li, 27 Aug 2019