ResearchTrend.AI
Training Vision Transformers with Only 2040 Images

Yunhao Cao, Hao Yu, Jianxin Wu
26 January 2022 · arXiv 2201.10728 · ViT

Papers citing "Training Vision Transformers with Only 2040 Images"

14 papers shown.
Scaling Backwards: Minimal Synthetic Pre-training?
Ryo Nakamura, Ryu Tadokoro, Ryosuke Yamada, Tim Puhlfürß, Iro Laina, Christian Rupprecht, Walid Maalej, Rio Yokota, Hirokatsu Kataoka
DD · 01 Aug 2024
uaMix-MAE: Efficient Tuning of Pretrained Audio Transformers with Unsupervised Audio Mixtures
Afrina Tabassum, Dung N. Tran, Trung D. Q. Dang, Ismini Lourentzou, K. Koishida
14 Mar 2024
Are Vision Transformers More Data Hungry Than Newborn Visual Systems?
Lalit Pandey, Samantha M. W. Wood, Justin N. Wood
05 Dec 2023
Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers
Hongjie Wang, Bhishma Dedhia, N. Jha
ViT, VLM · 27 May 2023
Mimetic Initialization of Self-Attention Layers
Asher Trockman, J. Zico Kolter
16 May 2023
Spikeformer: A Novel Architecture for Training High-Performance Low-Latency Spiking Neural Network
Yudong Li, Yunlin Lei, Xu Yang
19 Nov 2022
Augraphy: A Data Augmentation Library for Document Images
Alexander Groleau, Kok Wei Chee, Stefan Larson, Samay Maini, Jonathan Boarman
30 Aug 2022
GMML is All you Need
Sara Atito, Muhammad Awais, J. Kittler
ViT, VLM · 30 May 2022
Towards Data-Efficient Detection Transformers
Wen Wang, Jing Zhang, Yang Cao, Yongliang Shen, Dacheng Tao
ViT · 17 Mar 2022
LibFewShot: A Comprehensive Library for Few-shot Learning
Wenbin Li, Ziyi Wang, Xuesong Yang, C. Dong, ..., Jing Huo, Yinghuan Shi, Lei Wang, Yang Gao, Jiebo Luo
VLM · 10 Sep 2021
Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
29 Apr 2021
SiT: Self-supervised vIsion Transformer
Sara Atito Ali Ahmed, Muhammad Awais, J. Kittler
ViT · 08 Apr 2021
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT · 24 Feb 2021
Improved Baselines with Momentum Contrastive Learning
Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He
SSL · 09 Mar 2020