Transformer Vs. MLP-Mixer: Exponential Expressive Gap For NLP Problems
Papers citing "Transformer Vs. MLP-Mixer: Exponential Expressive Gap For NLP Problems"

30 / 30 papers shown
DaViT: Dual Attention Vision Transformers
Mingyu Ding, Bin Xiao, Noel Codella, Ping Luo, Jingdong Wang, Lu Yuan
07 Apr 2022
