arXiv:2102.03902
Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention
7 February 2021
Yunyang Xiong
Zhanpeng Zeng
Rudrasis Chakraborty
Mingxing Tan
G. Fung
Yin Li
Vikas Singh
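For context, the listed paper replaces the full n x n softmax attention matrix with a Nyström approximation built from a small set of landmark queries and keys. A minimal NumPy sketch of that idea follows; it uses an exact pseudoinverse where the paper uses an iterative approximation, and segment means as landmarks. All function names here are illustrative, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, m=8):
    """Approximate softmax(Q K^T / sqrt(d)) V with m landmarks.

    Requires the sequence length n to be divisible by m.
    Landmarks are means of contiguous segments of Q and K.
    """
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    Qm = Q.reshape(m, n // m, d).mean(axis=1)   # m landmark queries
    Km = K.reshape(m, n // m, d).mean(axis=1)   # m landmark keys
    F = softmax(Q @ Km.T * scale)               # n x m
    A = softmax(Qm @ Km.T * scale)              # m x m kernel among landmarks
    B = softmax(Qm @ K.T * scale)               # m x n
    # Nystrom reconstruction: F A^+ B acts as the n x n attention matrix,
    # but is never materialized; cost is O(n m) instead of O(n^2).
    return F @ np.linalg.pinv(A) @ (B @ V)
```

Compared with exact attention this trades accuracy for linear cost in sequence length, which is the trade-off the citing papers below build on or compare against.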
Papers citing "Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention" (16 of 116 papers shown):
- UFO-ViT: High Performance Linear Vision Transformer without Softmax
  Jeonggeun Song. 29 Sep 2021. 20 citations. [ViT]
- PermuteFormer: Efficient Relative Position Encoding for Long Sequences
  Peng-Jen Chen. 06 Sep 2021. 21 citations.
- PKSpell: Data-Driven Pitch Spelling and Key Signature Estimation
  Francesco Foscarin, Nicolas Audebert, Raphaël Fournier-S'niehotta. 27 Jul 2021. 11 citations.
- XCiT: Cross-Covariance Image Transformers
  Alaaeldin El-Nouby, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, ..., Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou. 17 Jun 2021. 499 citations. [ViT]
- Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers
  Mandela Patrick, Dylan Campbell, Yuki M. Asano, Ishan Misra, Florian Metze, Christoph Feichtenhofer, Andrea Vedaldi, João F. Henriques. 09 Jun 2021. 274 citations.
- A Survey of Transformers
  Tianyang Lin, Yuxin Wang, Xiangyang Liu, Xipeng Qiu. 08 Jun 2021. 1,088 citations. [ViT]
- TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification
  Zhucheng Shao, Hao Bian, Yang Chen, Yifeng Wang, Jian Zhang, Xiangyang Ji, Yongbing Zhang. 02 Jun 2021. 635 citations. [ViT, MedIm]
- Choose a Transformer: Fourier or Galerkin
  Shuhao Cao. 31 May 2021. 225 citations.
- U-Net Transformer: Self and Cross Attention for Medical Image Segmentation
  Olivier Petit, Nicolas Thome, Clément Rambour, L. Soler. 10 Mar 2021. 237 citations. [ViT, MedIm]
- Perceiver: General Perception with Iterative Attention
  Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, João Carreira. 04 Mar 2021. 976 citations. [VLM, ViT, MDE]
- LambdaNetworks: Modeling Long-Range Interactions Without Attention
  Irwan Bello. 17 Feb 2021. 179 citations.
- TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up
  Yi Ding, Shiyu Chang, Zhangyang Wang. 14 Feb 2021. 382 citations. [ViT]
- Transformers in Vision: A Survey
  Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, M. Shah. 04 Jan 2021. 2,431 citations. [ViT]
- Efficient Transformers: A Survey
  Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler. 14 Sep 2020. 1,102 citations. [VLM]
- Big Bird: Transformers for Longer Sequences
  Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 28 Jul 2020. 2,017 citations. [VLM]
- GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
  Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018. 6,984 citations. [ELM]