Less is More: Pay Less Attention in Vision Transformers


Bohan Zhuang
Jing Liu
Jianfei Cai

Papers citing "Less is More: Pay Less Attention in Vision Transformers" (33 papers)

- Linformer: Self-Attention with Linear Complexity — Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma (08 Jun 2020)
- Layer Normalization (21 Jul 2016)
