ResearchTrend.AI
Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?

13 September 2023
Bill Psomas, Ioannis Kakogeorgiou, Konstantinos Karantzalos, Yannis Avrithis
ViT
arXiv: 2309.06891

Papers citing "Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?"

8 papers shown:
  • Register and CLS tokens yield a decoupling of local and global features in large ViTs
    Alexander Lappe, M. Giese. 09 May 2025.
  • Beyond [cls]: Exploring the true potential of Masked Image Modeling representations
    Marcin Przewięźlikowski, Randall Balestriero, Wojciech Jasiński, Marek Śmieja, Bartosz Zieliński. 04 Dec 2024.
  • Vision Transformers Need Registers
    Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski. 28 Sep 2023. ViT
  • Localizing Objects with Self-Supervised Transformers and no Labels
    Oriane Siméoni, Gilles Puy, Huy V. Vo, Simon Roburin, Spyros Gidaris, Andrei Bursuc, P. Pérez, Renaud Marlet, Jean Ponce. 29 Sep 2021. ViT
  • Intriguing Properties of Vision Transformers
    Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, F. Khan, Ming-Hsuan Yang. 21 May 2021. ViT
  • Emerging Properties in Self-Supervised Vision Transformers
    Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin. 29 Apr 2021.
  • Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
    Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. 24 Feb 2021. ViT
  • Learning and aggregating deep local descriptors for instance-level recognition
    Giorgos Tolias, Tomáš Jeníček, Ondřej Chum. 26 Jul 2020.