ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Are Large-scale Datasets Necessary for Self-Supervised Pre-training? (arXiv:2112.10740)

20 December 2021
Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Hervé Jégou, Edouard Grave
Tags: SSL

Papers citing "Are Large-scale Datasets Necessary for Self-Supervised Pre-training?"

50 / 109 papers shown
Predicting butterfly species presence from satellite imagery using soft contrastive regularisation
Thijs L van der Plas, Stephen Law, Michael JO Pocock
14 May 2025

Vanishing Depth: A Depth Adapter with Positional Depth Encoding for Generalized Image Encoders
Paul Koch, Jörg Krüger, Ankit Chowdhury, O. Heimann
Tags: MDE
25 Mar 2025

A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning
Xin Wen, Bingchen Zhao, Yilun Chen, Jiangmiao Pang, Xiaojuan Qi
Tags: LM&Ro
10 Mar 2025

MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Representations
Benedikt Alkin, Lukas Miklautz, Sepp Hochreiter, Johannes Brandstetter
Tags: VLM
24 Feb 2025

Intelligent Anomaly Detection for Lane Rendering Using Transformer with Self-Supervised Pre-Training and Customized Fine-Tuning
Yongqi Dong, Xingmin Lu, Ruohan Li, Wei Song, B. Arem, Haneen Farah
Tags: ViT
21 Feb 2025

The Dynamic Duo of Collaborative Masking and Target for Advanced Masked Autoencoder Learning
Shentong Mo
23 Dec 2024

Self-Supervised Learning for Real-World Object Detection: a Survey
Alina Ciocarlan, Sidonie Lefebvre, S. L. Hégarat-Mascle, Arnaud Woiselle
Tags: ObjD
09 Oct 2024

Sapiens: Foundation for Human Vision Models
Rawal Khirodkar, Timur M. Bagautdinov, Julieta Martinez, Su Zhaoen, Austin James, Peter Selednik, Stuart Anderson, Shunsuke Saito
Tags: VLM
22 Aug 2024

Trajectory-aligned Space-time Tokens for Few-shot Action Recognition
Pulkit Kumar, Namitha Padmanabhan, Luke Luo, Sai Saketh Rambhatla, Abhinav Shrivastava
25 Jul 2024

ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders
Carlos Hinojosa, Shuming Liu, Bernard Ghanem
17 Jul 2024

4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities
Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir
Tags: MoE, VLM, MLLM
13 Jun 2024

Federated Document Visual Question Answering: A Pilot Study
Khanh Nguyen, Dimosthenis Karatzas
Tags: FedML
10 May 2024

101 Billion Arabic Words Dataset
Manel Aloui, Hasna Chouikhi, Ghaith Chaabane, Haithem Kchaou, Chehir Dhaouadi
29 Apr 2024

An Experimental Study on Exploring Strong Lightweight Vision Transformers via Masked Image Modeling Pre-Training
Jin Gao, Shubo Lin, Shaoru Wang, Yutong Kou, Zeming Li, Liang Li, Congxuan Zhang, Xiaoqin Zhang, Yizheng Wang, Weiming Hu
18 Apr 2024

Masked Modeling Duo: Towards a Universal Audio Pre-training Framework
Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, K. Kashino
09 Apr 2024

On Pretraining Data Diversity for Self-Supervised Learning
Hasan Hammoud, Tuhin Das, Fabio Pizzati, Philip H. S. Torr, Adel Bibi, Bernard Ghanem
20 Mar 2024

Synthetic Privileged Information Enhances Medical Image Representation Learning
Lucas Farndale, Chris Walsh, R. Insall, Ke Yuan
08 Mar 2024

Scalable Pre-training of Large Autoregressive Image Models
Alaaeldin El-Nouby, Michal Klein, Shuangfei Zhai, Miguel Angel Bautista, Alexander Toshev, Vaishaal Shankar, J. Susskind, Armand Joulin
Tags: VLM
16 Jan 2024

Masked Modeling for Self-supervised Representation Learning on Vision and Beyond
Siyuan Li, Luyuan Zhang, Zedong Wang, Di Wu, Lirong Wu, ..., Jun-Xiong Xia, Cheng Tan, Yang Liu, Baigui Sun, Stan Z. Li
Tags: SSL
31 Dec 2023

Morphing Tokens Draw Strong Masked Image Models
Taekyung Kim, Byeongho Heo, Dongyoon Han
30 Dec 2023

4M: Massively Multimodal Masked Modeling
David Mizrahi, Roman Bachmann, Oğuzhan Fatih Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, Amir Zamir
Tags: MLLM
11 Dec 2023

Can semi-supervised learning use all the data effectively? A lower bound perspective
Alexandru Țifrea, Gizem Yüce, Amartya Sanyal, Fanny Yang
30 Nov 2023

Asymmetric Masked Distillation for Pre-Training Small Foundation Models
Zhiyu Zhao, Bingkun Huang, Sen Xing, Gangshan Wu, Yu Qiao, Limin Wang
06 Nov 2023

Better with Less: A Data-Active Perspective on Pre-Training Graph Neural Networks
Jiarong Xu, Renhong Huang, Xin Jiang, Yuxuan Cao, Carl Yang, Chunping Wang, Yang Yang
Tags: AI4CE
02 Nov 2023

Learning with Unmasked Tokens Drives Stronger Vision Learners
Taekyung Kim, Sanghyuk Chun, Byeongho Heo, Dongyoon Han
Tags: SSL
20 Oct 2023

Heuristic Vision Pre-Training with Self-Supervised and Supervised Multi-Task Learning
Zhiming Qian
Tags: VLM, SSL
11 Oct 2023

Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors
Ido Amos, Jonathan Berant, Ankit Gupta
04 Oct 2023

M$^{3}$3D: Learning 3D priors using Multi-Modal Masked Autoencoders for 2D image and video understanding
Muhammad Abdullah Jamal, Omid Mohareri
Tags: 3DPC
26 Sep 2023

Masked Image Residual Learning for Scaling Deeper Vision Transformers
Guoxi Huang, Hongtao Fu, A. Bors
25 Sep 2023

MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments
Spyros Gidaris, Andrei Bursuc, Oriane Siméoni, Antonín Vobecký, N. Komodakis, Matthieu Cord, Patrick Pérez
Tags: SSL, ViT
18 Jul 2023

DreamTeacher: Pretraining Image Backbones with Deep Generative Models
Daiqing Li, Huan Ling, Amlan Kar, David Acuna, Seung Wook Kim, Karsten Kreis, Antonio Torralba, Sanja Fidler
Tags: VLM, DiffM
14 Jul 2023

Task-Robust Pre-Training for Worst-Case Downstream Adaptation
Jianghui Wang, Cheng Yang, Xingyu Xie, Cong Fang, Zhouchen Lin
Tags: OOD
21 Jun 2023

Learning to Mask and Permute Visual Tokens for Vision Transformer Pre-Training
Lorenzo Baraldi, Roberto Amoroso, Marcella Cornia, Lorenzo Baraldi, Andrea Pilzer, Rita Cucchiara
12 Jun 2023

Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles
Chaitanya K. Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, ..., Omid Poursaeed, Judy Hoffman, Jitendra Malik, Yanghao Li, Christoph Feichtenhofer
Tags: 3DH
01 Jun 2023

Robust Lane Detection through Self Pre-training with Masked Sequential Autoencoders and Fine-tuning with Customized PolyLoss
Ruohan Li, Yongqi Dong
26 May 2023

Objectives Matter: Understanding the Impact of Self-Supervised Objectives on Vision Transformer Representations
Shashank Shekhar, Florian Bordes, Pascal Vincent, Ari S. Morcos
25 Apr 2023

A Cookbook of Self-Supervised Learning
Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari S. Morcos, Shashank Shekhar, ..., Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun, Micah Goldblum
Tags: SyDa, FedML, SSL
24 Apr 2023

Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget
Johannes Lehner, Benedikt Alkin, Andreas Fürst, Elisabeth Rumetshofer, Lukas Miklautz, Sepp Hochreiter
20 Apr 2023

Self-Supervised Learning from Non-Object Centric Images with a Geometric Transformation Sensitive Architecture
Taeho Kim, Jong-Min Lee
17 Apr 2023

DINOv2: Learning Robust Visual Features without Supervision
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Q. Vo, Marc Szafraniec, ..., Hervé Jégou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski
Tags: VLM, CLIP, SSL
14 Apr 2023

Mixed Autoencoder for Self-supervised Visual Representation Learning
Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung
Tags: SSL
30 Mar 2023

Masked Scene Contrast: A Scalable Framework for Unsupervised 3D Representation Learning
Xiaoyang Wu, Xin Wen, Xihui Liu, Hengshuang Zhao
Tags: 3DPC
24 Mar 2023

Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning
Sungnyun Kim, Sangmin Bae, Se-Young Yun
20 Mar 2023

SeqCo-DETR: Sequence Consistency Training for Self-Supervised Object Detection with Transformers
Guoqiang Jin, Fan Yang, Mingshan Sun, R. Zhao, Yakun Liu, Wei Li, Tianpeng Bao, Liwei Wu, Xingyu Zeng, Rui Zhao
Tags: ViT
15 Mar 2023

Traj-MAE: Masked Autoencoders for Trajectory Prediction
Hao Chen, Jiaze Wang, Kun Shao, Furui Liu, Jianye Hao, Chenyong Guan, Guangyong Chen, Pheng-Ann Heng
12 Mar 2023

Overwriting Pretrained Bias with Finetuning Data
Angelina Wang, Olga Russakovsky
10 Mar 2023

Rethinking Self-Supervised Visual Representation Learning in Pre-training for 3D Human Pose and Shape Estimation
Hongsuk Choi, Hyeongjin Nam, T. Lee, Gyeongsik Moon, Kyoung Mu Lee
09 Mar 2023

Centroid-centered Modeling for Efficient Vision Transformer Pre-training
Xin Yan, Zuchao Li, Lefei Zhang, Bo Du, Dacheng Tao
Tags: VLM
08 Mar 2023

Efficient Masked Autoencoders with Self-Consistency
Zhaowen Li, Yousong Zhu, Zhiyang Chen, Wei Li, Chaoyang Zhao, Rui Zhao, Ming Tang, Jinqiao Wang
28 Feb 2023

Revisiting Hidden Representations in Transfer Learning for Medical Imaging
Dovile Juodelyte, Amelia Jiménez-Sánchez, V. Cheplygina
Tags: OOD
16 Feb 2023