Are Large-scale Datasets Necessary for Self-Supervised Pre-training?

20 December 2021
Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Hervé Jégou, Edouard Grave
SSL

Papers citing "Are Large-scale Datasets Necessary for Self-Supervised Pre-training?"

50 / 109 papers shown
Semantic Image Segmentation: Two Decades of Research
G. Csurka, Riccardo Volpi, Boris Chidlovskii
3DV · 13 Feb 2023

Understanding Self-Supervised Pretraining with Part-Aware Representation Learning
Jie Zhu, Jiyang Qi, Mingyu Ding, Xiaokang Chen, Ping Luo, Xinggang Wang, Wenyu Liu, Leye Wang, Jingdong Wang
SSL · 27 Jan 2023

An Experimental Study on Pretraining Transformers from Scratch for IR
Carlos Lassance, Hervé Déjean, S. Clinchant
25 Jan 2023

Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture
Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael G. Rabbat, Yann LeCun, Nicolas Ballas
SSL, AI4TS, MDE · 19 Jan 2023

Disjoint Masking with Joint Distillation for Efficient Masked Image Modeling
Xin Ma, Chang-Shu Liu, Chunyu Xie, Long Ye, Yafeng Deng, Xiang Ji
31 Dec 2022

Image Compression with Product Quantized Masked Image Modeling
Alaaeldin El-Nouby, Matthew Muckley, Karen Ullrich, Ivan Laptev, Jakob Verbeek, Hervé Jégou
MQ · 14 Dec 2022

Masked autoencoders are effective solution to transformer data-hungry
Jia-ju Mao, Honggu Zhou, Xuesong Yin, Binling Nie
MedIm · 12 Dec 2022

Audiovisual Masked Autoencoders
Mariana-Iuliana Georgescu, Eduardo Fonseca, Radu Tudor Ionescu, Mario Lucic, Cordelia Schmid, Anurag Arnab
SSL · 09 Dec 2022

Co-training $2^L$ Submodels for Visual Recognition
Hugo Touvron, Matthieu Cord, Maxime Oquab, Piotr Bojanowski, Jakob Verbeek, Hervé Jégou
VLM · 09 Dec 2022

ResFormer: Scaling ViTs with Multi-Resolution Training
Rui Tian, Zuxuan Wu, Qiuju Dai, Hang-Rui Hu, Yu Qiao, Yu-Gang Jiang
ViT · 01 Dec 2022

Simplifying and Understanding State Space Models with Diagonal Linear RNNs
Ankit Gupta, Harsh Mehta, Jonathan Berant
01 Dec 2022

CroCo v2: Improved Cross-view Completion Pre-training for Stereo Matching and Optical Flow
Philippe Weinzaepfel, Thomas Lucas, Vincent Leroy, Yohann Cabon, Vaibhav Arora, Romain Brégier, G. Csurka, L. Antsfeld, Boris Chidlovskii, Jérôme Revaud
ViT · 18 Nov 2022

CAE v2: Context Autoencoder with CLIP Target
Xinyu Zhang, Jiahui Chen, Junkun Yuan, Qiang Chen, Jian Wang, ..., Jimin Pi, Kun Yao, Junyu Han, Errui Ding, Jingdong Wang
VLM, CLIP · 17 Nov 2022

Stare at What You See: Masked Image Modeling without Reconstruction
Hongwei Xue, Peng Gao, Hongyang Li, Yu Qiao, Hao Sun, Houqiang Li, Jiebo Luo
16 Nov 2022

MARLIN: Masked Autoencoder for facial video Representation LearnINg
Zhixi Cai, Shreya Ghosh, Kalin Stefanov, Abhinav Dhall, Jianfei Cai, Hamid Rezatofighi, Reza Haffari, Munawar Hayat
ViT, CVBM · 12 Nov 2022

Masked Modeling Duo: Learning Representations by Encouraging Both Networks to Model the Input
Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, K. Kashino
SSL · 26 Oct 2022

Self-Supervised Learning with Masked Image Modeling for Teeth Numbering, Detection of Dental Restorations, and Instance Segmentation in Dental Panoramic Radiographs
A. Almalki, Longin Jan Latecki
MedIm · 20 Oct 2022

Towards Sustainable Self-supervised Learning
Shanghua Gao, Pan Zhou, Mingg-Ming Cheng, Shuicheng Yan
CLL · 20 Oct 2022

CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion
Philippe Weinzaepfel, Vincent Leroy, Thomas Lucas, Romain Brégier, Yohann Cabon, Vaibhav Arora, L. Antsfeld, Boris Chidlovskii, G. Csurka, Jérôme Revaud
SSL · 19 Oct 2022

A Unified View of Masked Image Modeling
Zhiliang Peng, Li Dong, Hangbo Bao, QiXiang Ye, Furu Wei
VLM · 19 Oct 2022

The Hidden Uniform Cluster Prior in Self-Supervised Learning
Mahmoud Assran, Randall Balestriero, Quentin Duval, Florian Bordes, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael G. Rabbat, Nicolas Ballas
SSL · 13 Oct 2022

Exploring Long-Sequence Masked Autoencoders
Ronghang Hu, Shoubhik Debnath, Saining Xie, Xinlei Chen
13 Oct 2022

VICRegL: Self-Supervised Learning of Local Visual Features
Adrien Bardes, Jean Ponce, Yann LeCun
SSL · 04 Oct 2022

Federated Training of Dual Encoding Models on Small Non-IID Client Datasets
Raviteja Vemulapalli, Warren Morningstar, Philip Mansfield, Hubert Eichner, K. Singhal, Arash Afkanpour, Bradley Green
FedML · 30 Sep 2022

Downstream Datasets Make Surprisingly Good Pretraining Corpora
Kundan Krishna, Saurabh Garg, Jeffrey P. Bigham, Zachary Chase Lipton
28 Sep 2022

On the Surprising Effectiveness of Transformers in Low-Labeled Video Recognition
Farrukh Rahman, Ömer Mubarek, Z. Kira
ViT · 15 Sep 2022

BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers
Zhiliang Peng, Li Dong, Hangbo Bao, QiXiang Ye, Furu Wei
12 Aug 2022

MILAN: Masked Image Pretraining on Language Assisted Representation
Zejiang Hou, Fei Sun, Yen-kuang Chen, Yuan Xie, S. Kung
ViT · 11 Aug 2022

Understanding Masked Image Modeling via Learning Occlusion Invariant Feature
Xiangwen Kong, Xiangyu Zhang
SSL · 08 Aug 2022

SdAE: Self-distillated Masked Autoencoder
Yabo Chen, Yuchen Liu, Dongsheng Jiang, Xiaopeng Zhang, Wenrui Dai, H. Xiong, Qi Tian
ViT · 31 Jul 2022

A Survey on Masked Autoencoder for Self-supervised Learning in Vision and Beyond
Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, Kang Zhang, In So Kweon
SSL · 30 Jul 2022

Bootstrapped Masked Autoencoders for Vision BERT Pretraining
Xiaoyi Dong, Jianmin Bao, Ting Zhang, Dongdong Chen, Weiming Zhang, Lu Yuan, Dong Chen, Fang Wen, Nenghai Yu
14 Jul 2022

Consecutive Pretraining: A Knowledge Transfer Learning Strategy with Relevant Unlabeled Data for Remote Sensing Domain
Tong Zhang, Peng Gao, Hao-Chen Dong, Zhuang Yin, Guanqun Wang, Wei Zhang, He Chen
08 Jul 2022

Revisiting Pretraining Objectives for Tabular Deep Learning
Ivan Rubachev, Artem Alekberov, Yu. V. Gorishniy, Artem Babenko
LMTD · 07 Jul 2022

OmniMAE: Single Model Masked Pretraining on Images and Videos
Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra
ViT · 16 Jun 2022

Masked Frequency Modeling for Self-Supervised Visual Pre-Training
Jiahao Xie, Wei Li, Xiaohang Zhan, Ziwei Liu, Yew-Soon Ong, Chen Change Loy
15 Jun 2022

Extreme Masking for Learning Instance and Distributed Visual Representations
Zhirong Wu, Zihang Lai, Xiao Sun, Stephen Lin
09 Jun 2022

On Data Scaling in Masked Image Modeling
Zhenda Xie, Zheng-Wei Zhang, Yue Cao, Yutong Lin, Yixuan Wei, Qi Dai, Han Hu
09 Jun 2022

Spatial Entropy as an Inductive Bias for Vision Transformers
E. Peruzzo, E. Sangineto, Yahui Liu, Marco De Nadai, Wei Bi, Bruno Lepri, N. Sebe
ViT, MDE · 09 Jun 2022

Siamese Image Modeling for Self-Supervised Vision Representation Learning
Chenxin Tao, Xizhou Zhu, Weijie Su, Gao Huang, Bin Li, Jie Zhou, Yu Qiao, Xiaogang Wang, Jifeng Dai
SSL · 02 Jun 2022

A Closer Look at Self-Supervised Lightweight Vision Transformers
Shaoru Wang, Jin Gao, Zeming Li, Jian-jun Sun, Weiming Hu
ViT · 28 May 2022

Object-wise Masked Autoencoders for Fast Pre-training
Jiantao Wu, Shentong Mo
ViT, OCL · 28 May 2022

Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation
Yixuan Wei, Han Hu, Zhenda Xie, Zheng-Wei Zhang, Yue Cao, Jianmin Bao, Dong Chen, B. Guo
CLIP · 27 May 2022

Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN
Siyuan Li, Di Wu, Fang Wu, Lei Shang, Stan.Z.Li
27 May 2022

RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder
Shitao Xiao, Zheng Liu, Yingxia Shao, Zhao Cao
RALM · 24 May 2022

Masked Siamese Networks for Label-Efficient Learning
Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael G. Rabbat, Nicolas Ballas
SSL · 14 Apr 2022

DeiT III: Revenge of the ViT
Hugo Touvron, Matthieu Cord, Hervé Jégou
ViT · 14 Apr 2022

Representation Learning by Detecting Incorrect Location Embeddings
Sepehr Sameni, Simon Jenni, Paolo Favaro
ViT · 10 Apr 2022

MultiMAE: Multi-modal Multi-task Masked Autoencoders
Roman Bachmann, David Mizrahi, Andrei Atanov, Amir Zamir
04 Apr 2022

Three things everyone should know about Vision Transformers
Hugo Touvron, Matthieu Cord, Alaaeldin El-Nouby, Jakob Verbeek, Hervé Jégou
ViT · 18 Mar 2022