How to Understand Masked Autoencoders
arXiv:2202.03670 · 8 February 2022
Shuhao Cao, Peng Xu, David A. Clifton
Papers citing "How to Understand Masked Autoencoders" (32 of 32 shown)
| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| KDC-MAE: Knowledge Distilled Contrastive Mask Auto-Encoder | Maheswar Bora, Saurabh Atreya, Aritra Mukherjee, Abhijit Das | – | 0 | 19 Nov 2024 |
| How Molecules Impact Cells: Unlocking Contrastive PhenoMolecular Retrieval | Philip Fradkin, Puria Azadi, Karush Suri, Frederik Wenkel, A. Bashashati, Maciej Sypetkowski, Dominique Beaini | – | 1 | 10 Sep 2024 |
| On the Role of Discrete Tokenization in Visual Representation Learning | Tianqi Du, Yifei Wang, Yisen Wang | – | 7 | 12 Jul 2024 |
| Emerging Property of Masked Token for Effective Pre-training | Hyesong Choi, Hunsang Lee, Seyoung Joung, Hyejin Park, Jiyeong Kim, Dongbo Min | – | 9 | 12 Apr 2024 |
| Salience-Based Adaptive Masking: Revisiting Token Dynamics for Enhanced Pre-training | Hyesong Choi, Hyejin Park, Kwang Moo Yi, Sungmin Cha, Dongbo Min | – | 9 | 12 Apr 2024 |
| Masked Autoencoders are PDE Learners | Anthony Y. Zhou, A. Farimani | AI4CE | 5 | 26 Mar 2024 |
| Deep Tensor Network | Yifan Zhang | – | 0 | 18 Nov 2023 |
| Understanding Masked Autoencoders From a Local Contrastive Perspective | Xiaoyu Yue, Lei Bai, Meng Wei, Jiangmiao Pang, Xihui Liu, Luping Zhou, Wanli Ouyang | SSL | 4 | 03 Oct 2023 |
| Information Flow in Self-Supervised Learning | Zhiyuan Tan, Jingqin Yang, Weiran Huang, Yang Yuan, Yifan Zhang | SSL | 13 | 29 Sep 2023 |
| BiLMa: Bidirectional Local-Matching for Text-based Person Re-identification | T. Fujii, Shuhei Tarashima | – | 8 | 09 Sep 2023 |
| Understanding Masked Autoencoders via Hierarchical Latent Variable Models | Lingjing Kong, Martin Q. Ma, Guan-Hong Chen, Eric P. Xing, Yuejie Chi, Louis-Philippe Morency, Kun Zhang | – | 30 | 08 Jun 2023 |
| Scalable Transformer for PDE Surrogate Modeling | Zijie Li, Dule Shu, A. Farimani | – | 64 | 27 May 2023 |
| Hint-Aug: Drawing Hints from Foundation Vision Transformers Towards Boosted Few-Shot Parameter-Efficient Tuning | Zhongzhi Yu, Shang Wu, Y. Fu, Shunyao Zhang, Yingyan Lin | – | 6 | 25 Apr 2023 |
| Image Deblurring by Exploring In-depth Properties of Transformer | Pengwei Liang, Junjun Jiang, Xianming Liu, Jiayi Ma | ViT | 20 | 24 Mar 2023 |
| CSSL-MHTR: Continual Self-Supervised Learning for Scalable Multi-script Handwritten Text Recognition | M. Dhiaf, Mohamed Ali Souibgui, Kai Wang, Yuyang Liu, Yousri Kessentini, Alicia Fornés, Ahmed Cheikh Rouhou | CLL | 2 | 16 Mar 2023 |
| Masked Image Modeling with Local Multi-Scale Reconstruction | Haoqing Wang, Yehui Tang, Yunhe Wang, Jianyuan Guo, Zhiwei Deng, Kai Han | – | 46 | 09 Mar 2023 |
| Hiding Data Helps: On the Benefits of Masking for Sparse Coding | Muthuraman Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge | – | 0 | 24 Feb 2023 |
| Continuous Spatiotemporal Transformers | Antonio H. O. Fonseca, E. Zappala, J. O. Caro, David van Dijk | – | 7 | 31 Jan 2023 |
| Aerial Image Object Detection With Vision Transformer Detector (ViTDet) | Liya Wang, A. Tien | – | 7 | 28 Jan 2023 |
| Understanding Self-Supervised Pretraining with Part-Aware Representation Learning | Jie Zhu, Jiyang Qi, Mingyu Ding, Xiaokang Chen, Ping Luo, Xinggang Wang, Wenyu Liu, Leye Wang, Jingdong Wang | SSL | 8 | 27 Jan 2023 |
| Teaching Matters: Investigating the Role of Supervision in Vision Transformers | Matthew Walmer, Saksham Suri, Kamal Gupta, Abhinav Shrivastava | – | 33 | 07 Dec 2022 |
| Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding | Zijiao Chen, Jiaxin Qing, Tiange Xiang, Wan Lin Yue, J. Zhou | DiffM, MedIm | 146 | 13 Nov 2022 |
| How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders | Qi Zhang, Yifei Wang, Yisen Wang | – | 72 | 15 Oct 2022 |
| Understanding Masked Image Modeling via Learning Occlusion Invariant Feature | Xiangwen Kong, Xiangyu Zhang | SSL | 53 | 08 Aug 2022 |
| A Survey on Masked Autoencoder for Self-supervised Learning in Vision and Beyond | Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, Kang Zhang, In So Kweon | SSL | 71 | 30 Jul 2022 |
| Multimodal Learning with Transformers: A Survey | P. Xu, Xiatian Zhu, David A. Clifton | ViT | 525 | 13 Jun 2022 |
| Towards Understanding Why Mask-Reconstruction Pretraining Helps in Downstream Tasks | Jia-Yu Pan, Pan Zhou, Shuicheng Yan | SSL | 15 | 08 Jun 2022 |
| Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains | Haiyang Yang, Meilin Chen, Yizhou Wang, Shixiang Tang, Feng Zhu, Lei Bai, Rui Zhao, Wanli Ouyang | – | 16 | 10 May 2022 |
| Masked Spectrogram Modeling using Masked Autoencoders for Learning General-purpose Audio Representation | Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, N. Harada, K. Kashino | – | 65 | 26 Apr 2022 |
| Masked Autoencoders Are Scalable Vision Learners | Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick | ViT, TPM | 7,434 | 11 Nov 2021 |
| Emerging Properties in Self-Supervised Vision Transformers | Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin | – | 5,773 | 29 Apr 2021 |
| Transformer in Transformer | Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang | ViT | 1,523 | 27 Feb 2021 |