ResearchTrend.AI

Emerging Properties in Self-Supervised Vision Transformers (arXiv:2104.14294)
29 April 2021
Mathilde Caron
Hugo Touvron
Ishan Misra
Hervé Jégou
Julien Mairal
Piotr Bojanowski
Armand Joulin

Papers citing "Emerging Properties in Self-Supervised Vision Transformers"

50 / 4,175 papers shown
OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data
Jongjin Park
Sukmin Yun
Jongheon Jeong
Jinwoo Shin
80
29
0
29 Jun 2021
OffRoadTranSeg: Semi-Supervised Segmentation using Transformers on OffRoad environments
Anukriti Singh
Kartikeya Singh
P. B. Sujit
ViT
62
8
0
26 Jun 2021
IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers
Bowen Pan
Yikang Shen
Yi Ding
Zhangyang Wang
Rogerio Feris
A. Oliva
VLM, ViT
126
165
0
23 Jun 2021
Estimating the Robustness of Classification Models by the Structure of the Learned Feature-Space
Kalun Ho
Franz-Josef Pfreundt
J. Keuper
Margret Keuper
OOD, UQ, CV
50
3
0
23 Jun 2021
Credal Self-Supervised Learning
Julian Lienen
Eyke Hüllermeier
SSL
57
21
0
22 Jun 2021
Encoder-Decoder Architectures for Clinically Relevant Coronary Artery Segmentation
João Lourenço Silva
M. Menezes
T. Rodrigues
B. Silva
F. Pinto
Arlindo L. Oliveira
MedIm
130
17
0
21 Jun 2021
How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
Andreas Steiner
Alexander Kolesnikov
Xiaohua Zhai
Ross Wightman
Jakob Uszkoreit
Lucas Beyer
ViT
139
638
0
18 Jun 2021
Efficient Self-supervised Vision Transformers for Representation Learning
Chunyuan Li
Jianwei Yang
Pengchuan Zhang
Mei Gao
Bin Xiao
Xiyang Dai
Lu Yuan
Jianfeng Gao
ViT
108
214
0
17 Jun 2021
Visual Correspondence Hallucination
Hugo Germain
Vincent Lepetit
Guillaume Bourmaud
81
11
0
17 Jun 2021
XCiT: Cross-Covariance Image Transformers
Alaaeldin El-Nouby
Hugo Touvron
Mathilde Caron
Piotr Bojanowski
Matthijs Douze
...
Ivan Laptev
Natalia Neverova
Gabriel Synnaeve
Jakob Verbeek
Hervé Jégou
ViT
151
516
0
17 Jun 2021
The 2021 Image Similarity Dataset and Challenge
Matthijs Douze
Giorgos Tolias
Ed Pizzi
Zoe Papakipos
L. Chanussot
...
Maxim Maximov
Laura Leal-Taixé
Ismail Elezi
Ondřej Chum
Cristian Canton Ferrer
63
63
0
17 Jun 2021
Long-Short Temporal Contrastive Learning of Video Transformers
Jue Wang
Gedas Bertasius
Du Tran
Lorenzo Torresani
VLM, ViT
151
50
0
17 Jun 2021
BEiT: BERT Pre-Training of Image Transformers
Hangbo Bao
Li Dong
Songhao Piao
Furu Wei
ViT
328
2,852
0
15 Jun 2021
Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data
Ashraful Islam
Chun-Fu Chen
Yikang Shen
Leonid Karlinsky
Rogerio Feris
Richard J. Radke
133
85
0
14 Jun 2021
Revisiting Model Stitching to Compare Neural Representations
Yamini Bansal
Preetum Nakkiran
Boaz Barak
FedML
114
121
0
14 Jun 2021
Delving Deep into the Generalization of Vision Transformers under Distribution Shifts
Chongzhi Zhang
Mingyuan Zhang
Shanghang Zhang
Daisheng Jin
Qiang-feng Zhou
Zhongang Cai
Haiyu Zhao
Xianglong Liu
Ziwei Liu
69
105
0
14 Jun 2021
Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning
Liangqiong Qu
Yuyin Zhou
Paul Pu Liang
Yingda Xia
Feifei Wang
Ehsan Adeli
L. Fei-Fei
D. Rubin
FedML, AI4CE
109
186
0
10 Jun 2021
Transformed CNNs: recasting pre-trained convolutional layers with self-attention
Stéphane d'Ascoli
Levent Sagun
Giulio Biroli
Ari S. Morcos
ViT
56
6
0
10 Jun 2021
MST: Masked Self-Supervised Transformer for Visual Representation
Zhaowen Li
Zhiyang Chen
Fan Yang
Wei Li
Yousong Zhu
...
Rui Deng
Liwei Wu
Rui Zhao
Ming Tang
Jinqiao Wang
ViT
89
168
0
10 Jun 2021
Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers
Mandela Patrick
Dylan Campbell
Yuki M. Asano
Ishan Misra
Florian Metze
Christoph Feichtenhofer
Andrea Vedaldi
João F. Henriques
108
282
0
09 Jun 2021
Pretrained Encoders are All You Need
Mina Khan
P. Srivatsa
Advait Rane
Shriram Chenniappa
Rishabh Anand
Sherjil Ozair
Pattie Maes
SSL, VLM
81
6
0
09 Jun 2021
Scaling Vision Transformers
Xiaohua Zhai
Alexander Kolesnikov
N. Houlsby
Lucas Beyer
ViT
157
1,098
0
08 Jun 2021
DETReg: Unsupervised Pretraining with Region Priors for Object Detection
Amir Bar
Xin Wang
Vadim Kantorov
Colorado Reed
Roei Herzig
Gal Chechik
Anna Rohrbach
Trevor Darrell
Amir Globerson
ViT
79
118
0
08 Jun 2021
Interpretable agent communication from scratch (with a generic visual processor emerging on the side)
Roberto Dessì
Eugene Kharitonov
Marco Baroni
93
28
0
08 Jun 2021
On Improving Adversarial Transferability of Vision Transformers
Muzammal Naseer
Kanchana Ranasinghe
Salman Khan
Fahad Shahbaz Khan
Fatih Porikli
ViT
101
95
0
08 Jun 2021
SIMONe: View-Invariant, Temporally-Abstracted Object Representations via Unsupervised Video Decomposition
Rishabh Kabra
Daniel Zoran
Goker Erdogan
Loic Matthey
Antonia Creswell
M. Botvinick
Alexander Lerchner
Christopher P. Burgess
OCL
123
79
0
07 Jun 2021
Efficient Training of Visual Transformers with Small Datasets
Yahui Liu
E. Sangineto
Wei Bi
N. Sebe
Bruno Lepri
Marco De Nadai
ViT
75
174
0
07 Jun 2021
Exploring the Limits of Out-of-Distribution Detection
Stanislav Fort
Jie Jessie Ren
Balaji Lakshminarayanan
103
342
0
06 Jun 2021
Points2Polygons: Context-Based Segmentation from Weak Labels Using Adversarial Networks
K. Yu
Hakeem Frank
Daniel Wilson
60
1
0
05 Jun 2021
Aligning Pretraining for Detection via Object-Level Contrastive Learning
Fangyun Wei
Yue Gao
Zhirong Wu
Han Hu
Stephen Lin
ObjD
70
148
0
04 Jun 2021
CATs: Cost Aggregation Transformers for Visual Correspondence
Seokju Cho
Sunghwan Hong
Sangryul Jeon
Yunsung Lee
Kwanghoon Sohn
Seungryong Kim
ViT
81
92
0
04 Jun 2021
Semantic-Aware Contrastive Learning for Multi-object Medical Image Segmentation
Ho Hin Lee
Yucheng Tang
Qi Yang
Xin Yu
Shunxing Bao
L. Cai
Lucas W. Remedios
Bennett A. Landman
Yuankai Huo
56
8
0
03 Jun 2021
When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations
Xiangning Chen
Cho-Jui Hsieh
Boqing Gong
ViT
108
330
0
03 Jun 2021
Container: Context Aggregation Network
Peng Gao
Jiasen Lu
Hongsheng Li
Roozbeh Mottaghi
Aniruddha Kembhavi
ViT
106
72
0
02 Jun 2021
You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection
Yuxin Fang
Bencheng Liao
Xinggang Wang
Jiemin Fang
Jiyang Qi
Rui Wu
Jianwei Niu
Wenyu Liu
ViT
73
326
0
01 Jun 2021
Exploring the Diversity and Invariance in Yourself for Visual Pre-Training Task
Longhui Wei
Lingxi Xie
Wen-gang Zhou
Houqiang Li
Qi Tian
SSL
72
3
0
01 Jun 2021
Active Learning in Bayesian Neural Networks with Balanced Entropy Learning Principle
J. Woo
108
11
0
30 May 2021
Transformer-Based Source-Free Domain Adaptation
Guanglei Yang
Hao Tang
Zhun Zhong
M. Ding
Ling Shao
N. Sebe
Elisa Ricci
ViT
82
42
0
28 May 2021
KVT: k-NN Attention for Boosting Vision Transformers
Pichao Wang
Xue Wang
F. Wang
Ming Lin
Shuning Chang
Hao Li
Rong Jin
ViT
131
107
0
28 May 2021
GraphVICRegHSIC: Towards improved self-supervised representation learning for graphs with a hyrbid loss function
Sayan Nag
SSL
48
0
0
25 May 2021
Unsupervised Visual Representation Learning by Online Constrained K-Means
Qi Qian
Yuanhong Xu
Juhua Hu
Hao Li
Rong Jin
CML, SSL
77
36
0
24 May 2021
Intriguing Properties of Vision Transformers
Muzammal Naseer
Kanchana Ranasinghe
Salman Khan
Munawar Hayat
Fahad Shahbaz Khan
Ming-Hsuan Yang
ViT
346
654
0
21 May 2021
Vision Transformers are Robust Learners
Sayak Paul
Pin-Yu Chen
ViT
77
312
0
17 May 2021
Semi-Supervised Classification and Segmentation on High Resolution Aerial Images
Sahil Khose
Abhiraj Tiwari
Ankita Ghosh
38
10
0
16 May 2021
Using Self-Supervised Auxiliary Tasks to Improve Fine-Grained Facial Representation
Mahdi Pourmirzaei
G. Montazer
Farzaneh Esmaili
CVBM
66
28
0
13 May 2021
Breaking Shortcut: Exploring Fully Convolutional Cycle-Consistency for Video Correspondence Learning
Yansong Tang
Zhenyu Jiang
Zhenda Xie
Yue Cao
Zheng Zhang
Philip Torr
Han Hu
113
6
0
12 May 2021
When Does Contrastive Visual Representation Learning Work?
Elijah Cole
Xuan S. Yang
Kimberly Wilber
Oisin Mac Aodha
Serge Belongie
SSL
90
125
0
12 May 2021
Self-Supervised Learning with Swin Transformers
Zhenda Xie
Yutong Lin
Zhuliang Yao
Zheng Zhang
Qi Dai
Yue Cao
Han Hu
ViT
87
183
0
10 May 2021
Contrastive Attraction and Contrastive Repulsion for Representation Learning
Huangjie Zheng
Xu Chen
Jiangchao Yao
Hongxia Yang
Chunyuan Li
Ya Zhang
Hao Zhang
Ivor Tsang
Jingren Zhou
Mingyuan Zhou
SSL
113
12
0
08 May 2021
ResMLP: Feedforward networks for image classification with data-efficient training
Hugo Touvron
Piotr Bojanowski
Mathilde Caron
Matthieu Cord
Alaaeldin El-Nouby
...
Gautier Izacard
Armand Joulin
Gabriel Synnaeve
Jakob Verbeek
Hervé Jégou
VLM
84
674
0
07 May 2021