Are Large-scale Datasets Necessary for Self-Supervised Pre-training?
arXiv: 2112.10740 · 20 December 2021
Authors: Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Hervé Jégou, Edouard Grave
Tags: SSL
Papers citing "Are Large-scale Datasets Necessary for Self-Supervised Pre-training?" (9 of 109 shown)
Corrupted Image Modeling for Self-Supervised Visual Pre-Training (07 Feb 2022)
Yuxin Fang, Li Dong, Hangbo Bao, Xinggang Wang, Furu Wei
17 · 87 · 0

Context Autoencoder for Self-Supervised Representation Learning (07 Feb 2022)
Xiaokang Chen, Mingyu Ding, Xiaodi Wang, Ying Xin, Shentong Mo, Yunhao Wang, Shumin Han, Ping Luo, Gang Zeng, Jingdong Wang
Tags: SSL
45 · 386 · 0

SLIP: Self-supervision meets Language-Image Pre-training (23 Dec 2021)
Norman Mu, Alexander Kirillov, David A. Wagner, Saining Xie
Tags: VLM, CLIP
60 · 476 · 0

Masked Autoencoders Are Scalable Vision Learners (11 Nov 2021)
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
Tags: ViT, TPM
305 · 7,434 · 0
VisDA-2021 Competition: Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data (23 Jul 2021)
D. Bashkirova, Dan Hendrycks, Donghyun Kim, Samarth Mishra, Kate Saenko, Kuniaki Saito, Piotr Teterwak, Ben Usman
Tags: OOD
10 · 19 · 0
Emerging Properties in Self-Supervised Vision Transformers (29 Apr 2021)
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
314 · 5,775 · 0

Zero-Shot Text-to-Image Generation (24 Feb 2021)
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
Tags: VLM
255 · 4,777 · 0

Pre-training without Natural Images (21 Jan 2021)
Hirokatsu Kataoka, Kazushige Okayasu, Asato Matsumoto, Eisuke Yamagata, Ryosuke Yamada, Nakamasa Inoue, Akio Nakamura, Y. Satoh
79 · 116 · 0

CrossTransformers: spatially-aware few-shot transfer (22 Jul 2020)
Carl Doersch, Ankush Gupta, Andrew Zisserman
Tags: ViT
201 · 330 · 0