ResearchTrend.AI
arXiv: 2308.08884
SRMAE: Masked Image Modeling for Scale-Invariant Deep Representations


17 August 2023
Zhiming Wang, Lin Gu, Feng Lu

Papers citing "SRMAE: Masked Image Modeling for Scale-Invariant Deep Representations"

35 / 35 papers shown
CAE v2: Context Autoencoder with CLIP Target
Xinyu Zhang, Jiahui Chen, Junkun Yuan, Qiang Chen, Jian Wang, ..., Jimin Pi, Kun Yao, Junyu Han, Errui Ding, Jingdong Wang
17 Nov 2022
Efficient Self-supervised Vision Pretraining with Local Masked Reconstruction
Jun Chen, Ming Hu, Boyang Albert Li, Mohamed Elhoseiny
01 Jun 2022
Green Hierarchical Vision Transformer for Masked Image Modeling
Lang Huang, Shan You, Mingkai Zheng, Fei Wang, Chao Qian, T. Yamasaki
26 May 2022
Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision Transformers with Locality
Xiang Li, Wenhai Wang, Lingfeng Yang, Jian Yang
20 May 2022
data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language
Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
07 Feb 2022
Masked Feature Prediction for Self-Supervised Visual Pre-Training
Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, Christoph Feichtenhofer
16 Dec 2021
PeCo: Perceptual Codebook for BERT Pre-training of Vision Transformers
Xiaoyi Dong, Jianmin Bao, Ting Zhang, Dongdong Chen, Weiming Zhang, Lu Yuan, Dong Chen, Fang Wen, Nenghai Yu, Baining Guo
24 Nov 2021
iBOT: Image BERT Pre-Training with Online Tokenizer
Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, Tao Kong
15 Nov 2021
Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
11 Nov 2021
Multi-Scale Aligned Distillation for Low-Resolution Detection
Lu Qi, Jason Kuen, Jiuxiang Gu, Zhe Lin, Yi Wang, Yukang Chen, Yanwei Li, Jiaya Jia
14 Sep 2021
SwinIR: Image Restoration Using Swin Transformer
Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, Radu Timofte
23 Aug 2021
BEiT: BERT Pre-Training of Image Transformers
Hangbo Bao, Li Dong, Songhao Piao, Furu Wei
15 Jun 2021
Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
29 Apr 2021
An Empirical Study of Training Self-Supervised Vision Transformers
Xinlei Chen, Saining Xie, Kaiming He
05 Apr 2021
Student Network Learning via Evolutionary Knowledge Distillation
Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge
23 Mar 2021
Zero-Shot Text-to-Image Generation
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
24 Feb 2021
Pre-Trained Image Processing Transformer
Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao
01 Dec 2020
Densely Guided Knowledge Distillation using Multiple Teacher Assistants
Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang
18 Sep 2020
Robust Re-Identification by Multiple Views Knowledge Distillation
Angelo Porrello, Luca Bergamini, Simone Calderara
08 Jul 2020
Cross-Resolution Learning for Face Recognition
F. V. Massoli, Giuseppe Amato, Fabrizio Falchi
05 Dec 2019
Exploring Factors for Improving Low Resolution Face Recognition
O. A. Aghdam, Behzad Bozorgtabar, H. K. Ekenel, Jean-Philippe Thiran
23 Jul 2019
Similarity-Preserving Knowledge Distillation
Frederick Tung, Greg Mori
23 Jul 2019
Relational Knowledge Distillation
Wonpyo Park, Dongju Kim, Yan Lu, Minsu Cho
10 Apr 2019
A Comprehensive Overhaul of Feature Distillation
Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, J. Choi
03 Apr 2019
Correlation Congruence for Knowledge Distillation
Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang
03 Apr 2019
Image Super-Resolution Using Very Deep Residual Channel Attention Networks
Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, Y. Fu
08 Jul 2018
Why do deep convolutional networks generalize so poorly to small image transformations?
Aharon Azulay, Yair Weiss
30 May 2018
Face hallucination using cascaded super-resolution and identity priors
Klemen Grm, Simon Dobrišek, Walter J. Scheirer, Vitomir Štruc
28 May 2018
Learning Deep Representations with Probabilistic Knowledge Transfer
Nikolaos Passalis, Anastasios Tefas
28 Mar 2018
Fast and Accurate Single Image Super-Resolution via Information Distillation Network
Zheng Hui, Xiumei Wang, Xinbo Gao
26 Mar 2018
Enhanced Deep Residual Networks for Single Image Super-Resolution
Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee
10 Jul 2017
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko, N. Komodakis
12 Dec 2016
Context Encoders: Feature Learning by Inpainting
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros
25 Apr 2016
Image Super-Resolution Using Deep Convolutional Networks
Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang
31 Dec 2014
FitNets: Hints for Thin Deep Nets
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
19 Dec 2014