Bottleneck Transformers for Visual Recognition

27 January 2021
A. Srinivas
Tsung-Yi Lin
Niki Parmar
Jonathon Shlens
Pieter Abbeel
Ashish Vaswani
    SLR
ArXiv | PDF | HTML

Papers citing "Bottleneck Transformers for Visual Recognition"

50 / 341 papers shown
GT U-Net: A U-Net Like Group Transformer Network for Tooth Root Segmentation
Yunxiang Li
Shuai Wang
Jun Wang
G. Zeng
Wenjun Liu
Qianni Zhang
Qun Jin
Yaqi Wang
ViT
MedIm
23
47
0
30 Sep 2021
UFO-ViT: High Performance Linear Vision Transformer without Softmax
Jeonggeun Song
ViT
106
20
0
29 Sep 2021
A hierarchical residual network with compact triplet-center loss for sketch recognition
Lei Wang
Shihui Zhang
Huang He
Xiaoxia Zhang
Y. Sang
17
2
0
28 Sep 2021
DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers
Changlin Li
Guangrun Wang
Bing Wang
Xiaodan Liang
Zhihui Li
Xiaojun Chang
25
9
0
21 Sep 2021
UNetFormer: A UNet-like Transformer for Efficient Semantic Segmentation of Remote Sensing Urban Scene Imagery
Libo Wang
Rui Li
Ce Zhang
Shenghui Fang
Chenxi Duan
Xiaoliang Meng
P. M. Atkinson
ViT
38
623
0
18 Sep 2021
Compute and Energy Consumption Trends in Deep Learning Inference
Radosvet Desislavov
Fernando Martínez-Plumed
José Hernández-Orallo
29
113
0
12 Sep 2021
Encoder-decoder with Multi-level Attention for 3D Human Shape and Pose Estimation
Ziniu Wan
Zhengjia Li
Maoqing Tian
Jianbo Liu
Shuai Yi
Hongsheng Li
3DH
27
80
0
06 Sep 2021
Searching for Efficient Multi-Stage Vision Transformers
Yi-Lun Liao
S. Karaman
Vivienne Sze
ViT
16
19
0
01 Sep 2021
Efficient conformer: Progressive downsampling and grouped attention for automatic speech recognition
Maxime Burchi
Valentin Vielzeuf
26
84
0
31 Aug 2021
Discovering Spatial Relationships by Transformers for Domain Generalization
Cuicui Kang
Karthik Nandakumar
ViT
16
0
0
23 Aug 2021
Mobile-Former: Bridging MobileNet and Transformer
Yinpeng Chen
Xiyang Dai
Dongdong Chen
Mengchen Liu
Xiaoyi Dong
Lu Yuan
Zicheng Liu
ViT
174
476
0
12 Aug 2021
Learning Fair Face Representation With Progressive Cross Transformer
Yong Li
Yufei Sun
Zhen Cui
Shiguang Shan
Jian Yang
19
12
0
11 Aug 2021
CrossFormer: A Versatile Vision Transformer Hinging on Cross-scale Attention
Wenxiao Wang
Lulian Yao
Long Chen
Binbin Lin
Deng Cai
Xiaofei He
Wei Liu
32
256
0
31 Jul 2021
DPT: Deformable Patch-based Transformer for Visual Recognition
Zhiyang Chen
Yousong Zhu
Chaoyang Zhao
Guosheng Hu
Wei Zeng
Jinqiao Wang
Ming Tang
ViT
14
98
0
30 Jul 2021
Rethinking and Improving Relative Position Encoding for Vision Transformer
Kan Wu
Houwen Peng
Minghao Chen
Jianlong Fu
Hongyang Chao
ViT
42
328
0
29 Jul 2021
Contextual Transformer Networks for Visual Recognition
Yehao Li
Ting Yao
Yingwei Pan
Tao Mei
ViT
16
468
0
26 Jul 2021
Query2Label: A Simple Transformer Way to Multi-Label Classification
Shilong Liu
Lei Zhang
Xiao Yang
Hang Su
Jun Zhu
8
187
0
22 Jul 2021
EAN: Event Adaptive Network for Enhanced Action Recognition
Yuan Tian
Yichao Yan
Guangtao Zhai
G. Guo
Zhiyong Gao
27
41
0
22 Jul 2021
CycleMLP: A MLP-like Architecture for Dense Prediction
Shoufa Chen
Enze Xie
Chongjian Ge
Runjian Chen
Ding Liang
Ping Luo
19
231
0
21 Jul 2021
A Comparative Study of Deep Learning Classification Methods on a Small Environmental Microorganism Image Dataset (EMDS-6): from Convolutional Neural Networks to Visual Transformers
Penghui Zhao
Chen Li
M. Rahaman
Hao Xu
Hechen Yang
Hongzan Sun
Tao Jiang
M. Grzegorzek
VLM
22
39
0
16 Jul 2021
Visual Parser: Representing Part-whole Hierarchies with Transformers
Shuyang Sun
Xiaoyu Yue
S. Bai
Philip H. S. Torr
50
27
0
13 Jul 2021
LANA: Latency Aware Network Acceleration
Pavlo Molchanov
Jimmy Hall
Hongxu Yin
Jan Kautz
Nicolò Fusi
Arash Vahdat
15
11
0
12 Jul 2021
Locally Enhanced Self-Attention: Combining Self-Attention and Convolution as Local and Context Terms
Chenglin Yang
Siyuan Qiao
Adam Kortylewski
Alan Yuille
20
4
0
12 Jul 2021
U-Net with Hierarchical Bottleneck Attention for Landmark Detection in Fundus Images of the Degenerated Retina
Shuyun Tang
Z. Qi
Jacob Granley
M. Beyeler
6
9
0
09 Jul 2021
GLiT: Neural Architecture Search for Global and Local Image Transformer
Boyu Chen
Peixia Li
Chuming Li
Baopu Li
Lei Bai
Chen Lin
Ming-hui Sun
Junjie Yan
Wanli Ouyang
ViT
24
85
0
07 Jul 2021
Gaze Estimation with an Ensemble of Four Architectures
Xin Cai
Boyu Chen
Jiabei Zeng
Jiajun Zhang
Yunjia Sun
X. Wang
Zhilong Ji
Xiao-Chang Liu
Xilin Chen
Shiguang Shan
CVBM
16
13
0
05 Jul 2021
AutoFormer: Searching Transformers for Visual Recognition
Minghao Chen
Houwen Peng
Jianlong Fu
Haibin Ling
ViT
36
259
0
01 Jul 2021
Focal Self-attention for Local-Global Interactions in Vision Transformers
Jianwei Yang
Chunyuan Li
Pengchuan Zhang
Xiyang Dai
Bin Xiao
Lu Yuan
Jianfeng Gao
ViT
42
428
0
01 Jul 2021
TransSC: Transformer-based Shape Completion for Grasp Evaluation
Wenkai Chen
Hongzhuo Liang
Zhaopeng Chen
F. Sun
Jianwei Zhang
ViT
10
14
0
01 Jul 2021
ViTAS: Vision Transformer Architecture Search
Xiu Su
Shan You
Jiyang Xie
Mingkai Zheng
Fei-Yue Wang
Chao Qian
Changshui Zhang
Xiaogang Wang
Chang Xu
ViT
16
54
0
25 Jun 2021
VOLO: Vision Outlooker for Visual Recognition
Li-xin Yuan
Qibin Hou
Zihang Jiang
Jiashi Feng
Shuicheng Yan
ViT
43
313
0
24 Jun 2021
Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition
Qibin Hou
Zihang Jiang
Li-xin Yuan
Ming-Ming Cheng
Shuicheng Yan
Jiashi Feng
ViT
MLLM
24
205
0
23 Jun 2021
Probabilistic Attention for Interactive Segmentation
Prasad Gabbur
Manjot Bilkhu
J. Movellan
24
13
0
23 Jun 2021
TokenLearner: What Can 8 Learned Tokens Do for Images and Videos?
Michael S. Ryoo
A. Piergiovanni
Anurag Arnab
Mostafa Dehghani
A. Angelova
ViT
21
127
0
21 Jun 2021
How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
Andreas Steiner
Alexander Kolesnikov
Xiaohua Zhai
Ross Wightman
Jakob Uszkoreit
Lucas Beyer
ViT
34
613
0
18 Jun 2021
Scene Transformer: A unified architecture for predicting multiple agent trajectories
Jiquan Ngiam
Benjamin Caine
Vijay Vasudevan
Zhengdong Zhang
H. Chiang
...
Ashish Venugopal
David J. Weiss
Benjamin Sapp
Zhifeng Chen
Jonathon Shlens
22
153
0
15 Jun 2021
Space-time Mixing Attention for Video Transformer
Adrian Bulat
Juan-Manuel Perez-Rua
Swathikiran Sudhakaran
Brais Martínez
Georgios Tzimiropoulos
ViT
25
124
0
10 Jun 2021
Transformed CNNs: recasting pre-trained convolutional layers with self-attention
Stéphane d'Ascoli
Levent Sagun
Giulio Biroli
Ari S. Morcos
ViT
8
6
0
10 Jun 2021
CoAtNet: Marrying Convolution and Attention for All Data Sizes
Zihang Dai
Hanxiao Liu
Quoc V. Le
Mingxing Tan
ViT
49
1,167
0
09 Jun 2021
Scaling Vision Transformers
Xiaohua Zhai
Alexander Kolesnikov
N. Houlsby
Lucas Beyer
ViT
32
1,058
0
08 Jun 2021
Refiner: Refining Self-attention for Vision Transformers
Daquan Zhou
Yujun Shi
Bingyi Kang
Weihao Yu
Zihang Jiang
Yuan Li
Xiaojie Jin
Qibin Hou
Jiashi Feng
ViT
21
59
0
07 Jun 2021
ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias
Yufei Xu
Qiming Zhang
Jing Zhang
Dacheng Tao
ViT
48
329
0
07 Jun 2021
Vision Transformers with Hierarchical Attention
Yun-Hai Liu
Yu-Huan Wu
Guolei Sun
Le Zhang
Ajad Chhatkuli
Luc Van Gool
ViT
30
32
0
06 Jun 2021
RegionViT: Regional-to-Local Attention for Vision Transformers
Chun-Fu Chen
Rameswar Panda
Quanfu Fan
ViT
16
193
0
04 Jun 2021
X-volution: On the unification of convolution and self-attention
Xuanhong Chen
Hang Wang
Bingbing Ni
ViT
25
23
0
04 Jun 2021
A Comparison for Anti-noise Robustness of Deep Learning Classification Methods on a Tiny Object Image Dataset: from Convolutional Neural Network to Visual Transformer and Performer
Ao Chen
Chen Li
Hao Chen
Hechen Yang
Penghui Zhao
Weiming Hu
Wanli Liu
Shuojia Zou
M. Grzegorzek
16
2
0
03 Jun 2021
Attention mechanisms and deep learning for machine vision: A survey of the state of the art
A. M. Hafiz
S. A. Parah
R. A. Bhat
19
45
0
03 Jun 2021
Container: Context Aggregation Network
Peng Gao
Jiasen Lu
Hongsheng Li
Roozbeh Mottaghi
Aniruddha Kembhavi
ViT
11
69
0
02 Jun 2021
MSG-Transformer: Exchanging Local Spatial Information by Manipulating Messenger Tokens
Jiemin Fang
Lingxi Xie
Xinggang Wang
Xiaopeng Zhang
Wenyu Liu
Qi Tian
ViT
13
73
0
31 May 2021
Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model
Jiangning Zhang
Chao Xu
Jian Li
Wenzhou Chen
Yabiao Wang
Ying Tai
Shuo Chen
Chengjie Wang
Feiyue Huang
Yong Liu
27
22
0
31 May 2021