ResT V2: Simpler, Faster and Stronger
Qing-Long Zhang, Yubin Yang
arXiv 2204.07366 · 15 April 2022 · ViT

Papers citing "ResT V2: Simpler, Faster and Stronger" (7 of 7 papers shown)

SMAFormer: Synergistic Multi-Attention Transformer for Medical Image Segmentation
Fuchen Zheng, Xuhang Chen, Weihuang Liu, Haolun Li, Yingtie Lei, Jiahui He, Chi-Man Pun, Shoujun Zhou
MedIm · 29 · 12 · 0 · 31 Aug 2024

EViT: An Eagle Vision Transformer with Bi-Fovea Self-Attention
Yulong Shi, Mingwei Sun, Yongshuai Wang, Hui Sun, Zengqiang Chen
34 · 4 · 0 · 10 Oct 2023

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT · TPM · 305 · 7,443 · 0 · 11 Nov 2021

Mobile-Former: Bridging MobileNet and Transformer
Yinpeng Chen, Xiyang Dai, Dongdong Chen, Mengchen Liu, Xiaoyi Dong, Lu Yuan, Zicheng Liu
ViT · 180 · 476 · 0 · 12 Aug 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT · 283 · 3,623 · 0 · 24 Feb 2021

Bottleneck Transformers for Visual Recognition
Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, Ashish Vaswani
SLR · 290 · 979 · 0 · 27 Jan 2021

Semantic Understanding of Scenes through the ADE20K Dataset
Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, Antonio Torralba
SSeg · 253 · 1,828 · 0 · 18 Aug 2016