ResearchTrend.AI

Stand-Alone Self-Attention in Vision Models
arXiv: 1906.05909 · 13 June 2019
Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Jonathon Shlens
Topics: VLM, SLR, ViT
Links: arXiv (abs) · PDF · HTML

Papers citing "Stand-Alone Self-Attention in Vision Models"

Showing 50 of 588 citing papers. Each entry lists the title, the authors, then the topic tags (where available), the site's three per-paper counters, and the posting date.
Neural Sequence-to-grid Module for Learning Symbolic Rules
Segwang Kim, Hyoungwook Nam, Joonyoung Kim, Kyomin Jung
NAI · 125 · 11 · 0 · 13 Jan 2021

SA-Det3D: Self-Attention Based Context-Aware 3D Object Detection
Prarthana Bhattacharyya, Chengjie Huang, Krzysztof Czarnecki
3DPC · 88 · 53 · 0 · 07 Jan 2021

Transformers in Vision: A Survey
Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, M. Shah
ViT · 385 · 2,560 · 0 · 04 Jan 2021

Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, ..., Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip Torr, Li Zhang
ViT · 204 · 2,924 · 0 · 31 Dec 2020

TransPose: Keypoint Localization via Transformer
Sen Yang, Zhibin Quan, Mu Nie, Wankou Yang
ViT · 203 · 270 · 0 · 28 Dec 2020

Training data-efficient image transformers & distillation through attention
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou
ViT · 402 · 6,848 · 0 · 23 Dec 2020

A Survey on Visual Transformer
Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, ..., Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, Dacheng Tao
ViT · 231 · 2,278 · 0 · 23 Dec 2020

Domain Adaptation of NMT models for English-Hindi Machine Translation Task at AdapMT ICON 2020
Ramchandra Joshi, Rushabh Karnavat, Kaustubh Jirapure, Raviraj Joshi
35 · 0 · 0 · 22 Dec 2020

HyperSeg: Patch-wise Hypernetwork for Real-time Semantic Segmentation
Y. Nirkin, Lior Wolf, Tal Hassner
SSeg · 83 · 180 · 0 · 21 Dec 2020

AttentionLite: Towards Efficient Self-Attention Models for Vision
Souvik Kundu, Sairam Sundaresan
72 · 22 · 0 · 21 Dec 2020

Towards the Localisation of Lesions in Diabetic Retinopathy
Samuel Mensah, B. Bah, Willie Brink
MedIm · 39 · 0 · 0 · 21 Dec 2020

Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning
A. Sagel, Amit Sahu, Stefan Matthes, H. Pfeifer, Tianming Qiu, Harald Ruess, Hao Shen, Julian Wormann
53 · 3 · 0 · 21 Dec 2020

LieTransformer: Equivariant self-attention for Lie Groups
M. Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, Hyunjik Kim
132 · 111 · 0 · 20 Dec 2020

Attention-based Image Upsampling
Souvik Kundu, Hesham Mostafa, S. N. Sridhar, Sairam Sundaresan
SupR · 37 · 11 · 0 · 17 Dec 2020

Spatial Context-Aware Self-Attention Model For Multi-Organ Segmentation
Hao Tang, Xingwei Liu, Kun Han, Shanlin Sun, Narisu Bai, Xuming Chen, Qian Huang, Yong Liu, Xiaohui Xie
SSeg, MedIm · 95 · 25 · 0 · 16 Dec 2020

Point Transformer
Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, V. Koltun
3DPC, ViT · 35 · 13 · 0 · 16 Dec 2020

TDAF: Top-Down Attention Framework for Vision Tasks
Bo Pang, Yizhuo Li, Jiefeng Li, Muchen Li, Hanwen Cao, Cewu Lu
83 · 10 · 0 · 14 Dec 2020

Convolutional LSTM Neural Networks for Modeling Wildland Fire Dynamics
J. Burge, M. Bonanni, M. Ihme, Lily Hu
90 · 19 · 0 · 11 Dec 2020

Cyclic orthogonal convolutions for long-range integration of features
Federica Freddi, Jezabel R. Garcia, Michael Bromberg, Sepehr Jalali, Da-shan Shiu, Alvin Chua, A. Bernacchia
37 · 0 · 0 · 11 Dec 2020

Fine-grained Angular Contrastive Learning with Coarse Labels
Guy Bukchin, Eli Schwartz, Kate Saenko, Ori Shahar, Rogerio Feris, Raja Giryes, Leonid Karlinsky
105 · 54 · 0 · 07 Dec 2020

Deep Learning and the Global Workspace Theory
R. V. Rullen, Ryota Kanai
132 · 68 · 0 · 04 Dec 2020

SAFCAR: Structured Attention Fusion for Compositional Action Recognition
Tae Soo Kim, Gregory Hager
CoGe · 67 · 10 · 0 · 03 Dec 2020

MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers
Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen
ViT · 150 · 531 · 0 · 01 Dec 2020

Deeper or Wider Networks of Point Clouds with Self-attention?
Haoxi Ran, Li Lu
3DPC · 46 · 1 · 0 · 29 Nov 2020

Deep Magnification-Flexible Upsampling over 3D Point Clouds
Y. Qian, Junhui Hou, Sam Kwong, Ying He
3DPC · 77 · 45 · 0 · 25 Nov 2020

Attention Aware Cost Volume Pyramid Based Multi-view Stereo Network for 3D Reconstruction
Anzhu Yu, Wenyue Guo, Bing Liu, Xin Chen, Xin Eric Wang, Xuefeng Cao, Bingchuan Jiang
3DV · 80 · 64 · 0 · 25 Nov 2020

Covariance Self-Attention Dual Path UNet for Rectal Tumor Segmentation
Haijun Gao, Bochuan Zheng, Dazhi Pan, Xiangyin Zeng
48 · 2 · 0 · 04 Nov 2020

NAS-FAS: Static-Dynamic Central Difference Network Search for Face Anti-Spoofing
Zitong Yu, Jun Wan, Yunxiao Qin, Xiaobai Li, Stan Z. Li, Guoying Zhao
CVBM · 93 · 207 · 0 · 03 Nov 2020

Real-time Semantic Segmentation with Context Aggregation Network
M. Yang, Saumya Kumaar, Ye Lyu, F. Nex
SSeg · 118 · 63 · 0 · 02 Nov 2020

DRF: A Framework for High-Accuracy Autonomous Driving Vehicle Modeling
Shu Jiang, Yu Wang, Longtao Lin, Weiman Lin, Yu Cao, Jinghao Miao, Qi Luo
36 · 2 · 0 · 01 Nov 2020

ProCAN: Progressive Growing Channel Attentive Non-Local Network for Lung Nodule Classification
M. Al-Shabi, Kelvin Shak, Maxine Tan
78 · 56 · 0 · 29 Oct 2020

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
ViT · 747 · 41,796 · 0 · 22 Oct 2020

Bayesian Attention Modules
Xinjie Fan, Shujian Zhang, Bo Chen, Mingyuan Zhou
183 · 62 · 0 · 20 Oct 2020

Attention Augmented ConvLSTM for Environment Prediction
Bernard Lange, Masha Itkina, Mykel J. Kochenderfer
139 · 21 · 0 · 19 Oct 2020

Multimodal Research in Vision and Language: A Review of Current and Emerging Trends
Shagun Uppal, Sarthak Bhagat, Devamanyu Hazarika, Navonil Majumdar, Soujanya Poria, Roger Zimmermann, Amir Zadeh
101 · 6 · 0 · 19 Oct 2020

Deformable DETR: Deformable Transformers for End-to-End Object Detection
Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai
ViT · 357 · 5,130 · 0 · 08 Oct 2020

Global Self-Attention Networks for Image Recognition
Zhuoran Shen, Irwan Bello, Raviteja Vemulapalli, Xuhui Jia, Ching-Hui Chen
ViT · 71 · 29 · 0 · 06 Oct 2020

Group Equivariant Stand-Alone Self-Attention For Vision
David W. Romero, Jean-Baptiste Cordonnier
MDE · 161 · 60 · 0 · 02 Oct 2020

Discovering Dynamic Salient Regions for Spatio-Temporal Graph Neural Networks
Iulia Duta, Andrei Liviu Nicolicioiu, Marius Leordeanu
70 · 6 · 0 · 17 Sep 2020

Efficient Transformers: A Survey
Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler
VLM · 227 · 1,133 · 0 · 14 Sep 2020

Visual Concept Reasoning Networks
Taesup Kim, Sungwoong Kim, Yoshua Bengio
71 · 7 · 0 · 26 Aug 2020

Feedback Attention for Cell Image Segmentation
Hiroki Tsuda, Eisuke Shibuya, Kazuhiro Hotta
61 · 6 · 0 · 14 Aug 2020

SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving
V. Kumar, Marvin Klingner, S. Yogamani, Stefan Milz, Tim Fingscheidt, Patrick Mäder
MDE · 128 · 81 · 0 · 10 Aug 2020

Neural Language Generation: Formulation, Methods, and Evaluation
Cristina Garbacea, Qiaozhu Mei
158 · 30 · 0 · 31 Jul 2020

Attention as Activation
Yimian Dai, Stefan Oehmcke, Fabian Gieseke, Yiquan Wu, Kobus Barnard
38 · 9 · 0 · 15 Jul 2020

Deep Reinforced Attention Learning for Quality-Aware Visual Recognition
Duo Li, Qifeng Chen
71 · 6 · 0 · 13 Jul 2020

Data Movement Is All You Need: A Case Study on Optimizing Transformers
A. Ivanov, Nikoli Dryden, Tal Ben-Nun, Shigang Li, Torsten Hoefler
142 · 135 · 0 · 30 Jun 2020

Multi-Head Attention: Collaborate Instead of Concatenate
Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi
82 · 115 · 0 · 29 Jun 2020

Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret
218 · 1,798 · 0 · 29 Jun 2020

ULSAM: Ultra-Lightweight Subspace Attention Module for Compact Convolutional Neural Networks
Rajat Saini, N. Jha, B. K. Das, Sparsh Mittal, C. Krishna Mohan
68 · 83 · 0 · 26 Jun 2020