Pre-training Vision Transformers with Very Limited Synthesized Images
arXiv:2307.14710 · 27 July 2023
Ryo Nakamura, Hirokatsu Kataoka, Sora Takashima, Edgar Josafat Martinez-Noriega, Rio Yokota, Nakamasa Inoue
Papers citing "Pre-training Vision Transformers with Very Limited Synthesized Images" (9 papers shown):

| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| MoireDB: Formula-generated Interference-fringe Image Dataset | Yuto Matsuo, Ryo Hayamizu, Hirokatsu Kataoka, Akio Nakamura | | 0 | 03 Feb 2025 |
| Data Collection-free Masked Video Modeling | Yuchi Ishikawa, Masayoshi Kondo, Yoshimitsu Aoki | ViT | 1 | 10 Sep 2024 |
| Scaling Backwards: Minimal Synthetic Pre-training? | Ryo Nakamura, Ryu Tadokoro, Ryosuke Yamada, Tim Puhlfürß, Iro Laina, Christian Rupprecht, Walid Maalej, Rio Yokota, Hirokatsu Kataoka | DD | 2 | 01 Aug 2024 |
| Reinforcement Learning with Generative Models for Compact Support Sets | Nico Schiavone, Xingyu Li | | 0 | 25 Apr 2024 |
| Training Vision Transformers with Only 2040 Images | Yunhao Cao, Hao Yu, Jianxin Wu | ViT | 42 | 26 Jan 2022 |
| Masked Autoencoders Are Scalable Vision Learners | Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick | ViT, TPM | 7,434 | 11 Nov 2021 |
| Improving Fractal Pre-training | Connor Anderson, Ryan Farrell | | 27 | 06 Oct 2021 |
| Emerging Properties in Self-Supervised Vision Transformers | Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin | | 5,775 | 29 Apr 2021 |
| Pre-training without Natural Images | Hirokatsu Kataoka, Kazushige Okayasu, Asato Matsumoto, Eisuke Yamagata, Ryosuke Yamada, Nakamasa Inoue, Akio Nakamura, Y. Satoh | | 116 | 21 Jan 2021 |