ResearchTrend.AI

An Empirical Study of Training Self-Supervised Vision Transformers
Xinlei Chen, Saining Xie, Kaiming He
arXiv:2104.02057 | ViT | 5 April 2021

Papers citing "An Empirical Study of Training Self-Supervised Vision Transformers"

Showing 50 of 469 citing papers.
Context Autoencoder for Self-Supervised Representation Learning
Xiaokang Chen, Mingyu Ding, Xiaodi Wang, Ying Xin, Shentong Mo, Yunhao Wang, Shumin Han, Ping Luo, Gang Zeng, Jingdong Wang
SSL | 45 · 386 · 0 | 07 Feb 2022

Training Vision Transformers with Only 2040 Images
Yunhao Cao, Hao Yu, Jianxin Wu
ViT | 112 · 43 · 0 | 26 Jan 2022

Leveraging Real Talking Faces via Self-Supervision for Robust Forgery Detection
A. Haliassos, Rodrigo Mira, Stavros Petridis, M. Pantic
CVBM | 40 · 126 · 0 | 18 Jan 2022

Transferability in Deep Learning: A Survey
Junguang Jiang, Yang Shu, Jianmin Wang, Mingsheng Long
OOD | 34 · 101 · 0 | 15 Jan 2022

Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?
Nenad Tomašev, Ioana Bica, Brian McWilliams, Lars Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrović
SSL | 90 · 81 · 0 | 13 Jan 2022
BigDatasetGAN: Synthesizing ImageNet with Pixel-wise Annotations
Daiqing Li, Huan Ling, Seung Wook Kim, Karsten Kreis, Adela Barriuso, Sanja Fidler, Antonio Torralba
36 · 103 · 0 | 12 Jan 2022

Robust Contrastive Learning against Noisy Views
Ching-Yao Chuang, R. Devon Hjelm, Xin Wang, Vibhav Vineet, Neel Joshi, Antonio Torralba, Stefanie Jegelka, Ya-heng Song
NoLa | 13 · 68 · 0 | 12 Jan 2022

SLIP: Self-supervision meets Language-Image Pre-training
Norman Mu, Alexander Kirillov, David Wagner, Saining Xie
VLM, CLIP | 63 · 480 · 0 | 23 Dec 2021

Improved skin lesion recognition by a Self-Supervised Curricular Deep Learning approach
Kirill Sirotkin, Marcos Escudero-Viñolo, Pablo Carballeira, Juan C. Sanmiguel
SSL | 32 · 6 · 0 | 22 Dec 2021

Meta-Learning and Self-Supervised Pretraining for Real World Image Translation
Ileana Rugina, Rumen Dangovski, Mark S. Veillette, Pooya Khorrami, Brian Cheung, Olga Simek, M. Soljačić
VLM, SSL | 25 · 2 · 0 | 22 Dec 2021
Are Large-scale Datasets Necessary for Self-Supervised Pre-training?
Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Hervé Jégou, Edouard Grave
SSL | 27 · 149 · 0 | 20 Dec 2021

Masked Feature Prediction for Self-Supervised Visual Pre-Training
Chen Wei, Haoqi Fan, Saining Xie, Chaoxia Wu, Alan Yuille, Christoph Feichtenhofer
ViT | 94 · 655 · 0 | 16 Dec 2021

Towards General and Efficient Active Learning
Yichen Xie, Masayoshi Tomizuka, Wei Zhan
VLM | 35 · 10 · 0 | 15 Dec 2021

Self-Supervised Modality-Aware Multiple Granularity Pre-Training for RGB-Infrared Person Re-Identification
Lin Wan, Qianyan Jing, Zongyuan Sun, Chuan Zhang, Zhihang Li, Yehansen Chen
SSL | 17 · 5 · 0 | 12 Dec 2021

General Facial Representation Learning in a Visual-Linguistic Manner
Yinglin Zheng, Hao Yang, Ting Zhang, Jianmin Bao, Dongdong Chen, Yangyu Huang, Lu Yuan, Dong Chen, Ming Zeng, Fang Wen
CVBM | 146 · 163 · 0 | 06 Dec 2021
BEVT: BERT Pretraining of Video Transformers
Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Yu-Gang Jiang, Luowei Zhou, Lu Yuan
ViT | 39 · 203 · 0 | 02 Dec 2021

Self-supervised Video Transformer
Kanchana Ranasinghe, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan, Michael S. Ryoo
ViT | 39 · 84 · 0 | 02 Dec 2021

Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup
Siyuan Li, Zicheng Liu, Zedong Wang, Di Wu, Zihan Liu, Stan Z. Li
35 · 26 · 0 | 30 Nov 2021

MC-SSL0.0: Towards Multi-Concept Self-Supervised Learning
Sara Atito, Muhammad Awais, Ammarah Farooq, Zhenhua Feng, J. Kittler
19 · 17 · 0 | 30 Nov 2021

Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis
Yucheng Tang, Dong Yang, Wenqi Li, H. Roth, Bennett Landman, Daguang Xu, V. Nath, Ali Hatamizadeh
ViT, MedIm | 42 · 517 · 0 | 29 Nov 2021
SWAT: Spatial Structure Within and Among Tokens
Kumara Kahatapitiya, Michael S. Ryoo
25 · 6 · 0 | 26 Nov 2021

Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning
Chenhongyi Yang, Lichao Huang, Elliot J. Crowley
SSL, VLM | 34 · 6 · 0 | 26 Nov 2021

PeCo: Perceptual Codebook for BERT Pre-training of Vision Transformers
Xiaoyi Dong, Jianmin Bao, Ting Zhang, Dongdong Chen, Weiming Zhang, Lu Yuan, Dong Chen, Fang Wen, Nenghai Yu, Baining Guo
ViT | 50 · 239 · 0 | 24 Nov 2021

Benchmarking Detection Transfer Learning with Vision Transformers
Yanghao Li, Saining Xie, Xinlei Chen, Piotr Dollar, Kaiming He, Ross B. Girshick
20 · 165 · 0 | 22 Nov 2021

Discrete Representations Strengthen Vision Transformer Robustness
Chengzhi Mao, Lu Jiang, Mostafa Dehghani, Carl Vondrick, Rahul Sukthankar, Irfan Essa
ViT | 27 · 44 · 0 | 20 Nov 2021
SimMIM: A Simple Framework for Masked Image Modeling
Zhenda Xie, Zheng-Wei Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, Han Hu
69 · 1,309 · 0 | 18 Nov 2021

LiT: Zero-Shot Transfer with Locked-image text Tuning
Xiaohua Zhai, Tianlin Li, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer
VLM | 48 · 543 · 0 | 15 Nov 2021

iBOT: Image BERT Pre-Training with Online Tokenizer
Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, Tao Kong
21 · 711 · 0 | 15 Nov 2021

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM | 314 · 7,457 · 0 | 11 Nov 2021

A Survey of Visual Transformers
Yang Liu, Yao Zhang, Yixin Wang, Feng Hou, Jin Yuan, Jiang Tian, Yang Zhang, Zhongchao Shi, Jianping Fan, Zhiqiang He
3DGS, ViT | 77 · 330 · 0 | 11 Nov 2021
Probabilistic Contrastive Learning for Domain Adaptation
Junjie Li, Yixin Zhang, Zilei Wang, Saihui Hou, Keyu Tu, Man Zhang
36 · 14 · 0 | 11 Nov 2021

Are we ready for a new paradigm shift? A Survey on Visual Deep MLP
Ruiyang Liu, Hai-Tao Zheng, Li Tao, Dun Liang, Haitao Zheng
85 · 97 · 0 | 07 Nov 2021

SSAST: Self-Supervised Audio Spectrogram Transformer
Yuan Gong, Cheng-I Jeff Lai, Yu-An Chung, James R. Glass
ViT | 38 · 267 · 0 | 19 Oct 2021

Self-Supervised Learning by Estimating Twin Class Distributions
Feng Wang, Tao Kong, Rufeng Zhang, Huaping Liu, Hang Li
SSL | 55 · 17 · 0 | 14 Oct 2021

The Impact of Spatiotemporal Augmentations on Self-Supervised Audiovisual Representation Learning
Haider Al-Tahan, Y. Mohsenzadeh
SSL, AI4TS | 34 · 0 · 0 | 13 Oct 2021

Dynamic Inference with Neural Interpreters
Nasim Rahaman, Muhammad Waleed Gondal, S. Joshi, Peter V. Gehler, Yoshua Bengio, Francesco Locatello, Bernhard Schölkopf
39 · 31 · 0 | 12 Oct 2021
Revitalizing CNN Attentions via Transformers in Self-Supervised Visual Representation Learning
Chongjian Ge, Youwei Liang, Yibing Song, Jianbo Jiao, Jue Wang, Ping Luo
ViT | 24 · 36 · 0 | 11 Oct 2021

PASS: An ImageNet replacement for self-supervised pretraining without humans
Yuki M. Asano, Christian Rupprecht, Andrew Zisserman, Andrea Vedaldi
VLM, SSL | 21 · 57 · 0 | 27 Sep 2021

Homography augumented momentum constrastive learning for SAR image retrieval
Seonho Park, M. Rysz, Kathleen M. Dipple, P. Pardalos
28 · 1 · 0 | 21 Sep 2021

A Study of the Generalizability of Self-Supervised Representations
Atharva Tendle, Mohammad Rashedul Hasan
76 · 27 · 0 | 19 Sep 2021

Self supervised learning improves dMMR/MSI detection from histology slides across multiple cancers
C. Saillard, Olivier Dehaene, Tanguy Marchand, O. Moindrot, A. Kamoun, B. Schmauch, S. Jégou
38 · 39 · 0 | 13 Sep 2021
Is Attention Better Than Matrix Decomposition?
Zhengyang Geng, Meng-Hao Guo, Hongxu Chen, Xia Li, Ke Wei, Zhouchen Lin
62 · 137 · 0 | 09 Sep 2021

Do Vision Transformers See Like Convolutional Neural Networks?
M. Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, Alexey Dosovitskiy
ViT | 67 · 925 · 0 | 19 Aug 2021

A Low Rank Promoting Prior for Unsupervised Contrastive Learning
Yu Wang, Jingyang Lin, Qi Cai, Yingwei Pan, Ting Yao, Hongyang Chao, Tao Mei
SSL | 38 · 16 · 0 | 05 Aug 2021

On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals
Haizhou Shi, Youcai Zhang, Siliang Tang, Wenjie Zhu, Yaqian Li, Yandong Guo, Yueting Zhuang
SyDa | 23 · 14 · 0 | 30 Jul 2021

Focal Self-attention for Local-Global Interactions in Vision Transformers
Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, Jianfeng Gao
ViT | 42 · 428 · 0 | 01 Jul 2021
Early Convolutions Help Transformers See Better
Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollár, Ross B. Girshick
20 · 753 · 0 | 28 Jun 2021

Efficient Self-supervised Vision Transformers for Representation Learning
Chunyuan Li, Jianwei Yang, Pengchuan Zhang, Mei Gao, Bin Xiao, Xiyang Dai, Lu Yuan, Jianfeng Gao
ViT | 37 · 209 · 0 | 17 Jun 2021

BEiT: BERT Pre-Training of Image Transformers
Hangbo Bao, Li Dong, Songhao Piao, Furu Wei
ViT | 68 · 2,749 · 0 | 15 Jun 2021

D2C: Diffusion-Denoising Models for Few-shot Conditional Generation
Abhishek Sinha, Jiaming Song, Chenlin Meng, Stefano Ermon
VLM, DiffM | 30 · 118 · 0 | 12 Jun 2021