ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2105.01601 — Cited By
MLP-Mixer: An all-MLP Architecture for Vision

4 May 2021
Ilya O. Tolstikhin
N. Houlsby
Alexander Kolesnikov
Lucas Beyer
Xiaohua Zhai
Thomas Unterthiner
Jessica Yung
Andreas Steiner
Daniel Keysers
Jakob Uszkoreit
Mario Lucic
Alexey Dosovitskiy

Papers citing "MLP-Mixer: An all-MLP Architecture for Vision"

Showing 44 of 1,144 citing papers.
NeRF in detail: Learning to sample for view synthesis
Relja Arandjelović
Andrew Zisserman
81
42
0
09 Jun 2021
Knowledge distillation: A good teacher is patient and consistent
Lucas Beyer
Xiaohua Zhai
Amelie Royer
L. Markeeva
Rohan Anil
Alexander Kolesnikov
VLM
109
299
0
09 Jun 2021
On the Connection between Local Attention and Dynamic Depth-wise Convolution
Qi Han
Zejia Fan
Qi Dai
Lei-huan Sun
Ming-Ming Cheng
Jiaying Liu
Jingdong Wang
ViT
119
111
0
08 Jun 2021
On Improving Adversarial Transferability of Vision Transformers
Muzammal Naseer
Kanchana Ranasinghe
Salman Khan
Fahad Shahbaz Khan
Fatih Porikli
ViT
91
95
0
08 Jun 2021
A Lightweight and Gradient-Stable Neural Layer
Yueyao Yu
Yin Zhang
87
0
0
08 Jun 2021
Graph-MLP: Node Classification without Message Passing in Graph
Yang Hu
Haoxuan You
Zhecan Wang
Zhicheng Wang
Erjin Zhou
Yue Gao
123
114
0
08 Jun 2021
ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias
Yufei Xu
Qiming Zhang
Jing Zhang
Dacheng Tao
ViT
185
341
0
07 Jun 2021
Vision Transformers with Hierarchical Attention
Yun-Hai Liu
Yu-Huan Wu
Guolei Sun
Le Zhang
Ajad Chhatkuli
Luc Van Gool
ViT
74
39
0
06 Jun 2021
Exploring the Limits of Out-of-Distribution Detection
Stanislav Fort
Jie Jessie Ren
Balaji Lakshminarayanan
99
341
0
06 Jun 2021
When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations
Xiangning Chen
Cho-Jui Hsieh
Boqing Gong
ViT
103
329
0
03 Jun 2021
Container: Context Aggregation Network
Peng Gao
Jiasen Lu
Hongsheng Li
Roozbeh Mottaghi
Aniruddha Kembhavi
ViT
97
72
0
02 Jun 2021
Can Attention Enable MLPs To Catch Up With CNNs?
Meng-Hao Guo
Zheng-Ning Liu
Tai-Jiang Mu
Dun Liang
Ralph Robert Martin
Shimin Hu
AAML
74
17
0
31 May 2021
A remark on a paper of Krotov and Hopfield [arXiv:2008.06996]
Fei Tang
Michael K Kopp
62
11
0
31 May 2021
Choose a Transformer: Fourier or Galerkin
Shuhao Cao
88
256
0
31 May 2021
MixerGAN: An MLP-Based Architecture for Unpaired Image-to-Image Translation
George Cazenavette
Manuel Ladron de Guevara
72
17
0
28 May 2021
An Attention Free Transformer
Shuangfei Zhai
Walter A. Talbott
Nitish Srivastava
Chen Huang
Hanlin Goh
Ruixiang Zhang
J. Susskind
ViT
91
132
0
28 May 2021
On the Bias Against Inductive Biases
George Cazenavette
Simon Lucey
SSL
37
2
0
28 May 2021
Pay Attention to MLPs
Hanxiao Liu
Zihang Dai
David R. So
Quoc V. Le
AI4CE
146
667
0
17 May 2021
Brain Inspired Face Recognition: A Computational Framework
P. Chowdhury
Angad Wadhwa
Nikhil Tyagi
CVBM
59
4
0
15 May 2021
FNet: Mixing Tokens with Fourier Transforms
James Lee-Thorp
Joshua Ainslie
Ilya Eckstein
Santiago Ontanon
125
533
0
09 May 2021
ResMLP: Feedforward networks for image classification with data-efficient training
Hugo Touvron
Piotr Bojanowski
Mathilde Caron
Matthieu Cord
Alaaeldin El-Nouby
...
Gautier Izacard
Armand Joulin
Gabriel Synnaeve
Jakob Verbeek
Hervé Jégou
VLM
82
671
0
07 May 2021
RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition
Xiaohan Ding
Chunlong Xia
Xinming Zhang
Xiaojie Chu
Jungong Han
Guiguang Ding
78
96
0
05 May 2021
Sifting out the features by pruning: Are convolutional networks the winning lottery ticket of fully connected ones?
Franco Pellegrini
Giulio Biroli
109
6
0
27 Apr 2021
ImageNet-21K Pretraining for the Masses
T. Ridnik
Emanuel Ben-Baruch
Asaf Noy
Lihi Zelnik-Manor
SSeg VLM CLIP
341
716
0
22 Apr 2021
All Tokens Matter: Token Labeling for Training Better Vision Transformers
Zihang Jiang
Qibin Hou
Li-xin Yuan
Daquan Zhou
Yujun Shi
Xiaojie Jin
Anran Wang
Jiashi Feng
ViT
131
209
0
22 Apr 2021
Cloth Interactive Transformer for Virtual Try-On
Bin Ren
Hao Tang
Fanyang Meng
Runwei Ding
Philip Torr
N. Sebe
ViT
115
34
0
12 Apr 2021
On the Adversarial Robustness of Vision Transformers
Rulin Shao
Zhouxing Shi
Jinfeng Yi
Pin-Yu Chen
Cho-Jui Hsieh
ViT
115
145
0
29 Mar 2021
A Practical Survey on Faster and Lighter Transformers
Quentin Fournier
G. Caron
Daniel Aloise
132
102
0
26 Mar 2021
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Ze Liu
Yutong Lin
Yue Cao
Han Hu
Yixuan Wei
Zheng Zhang
Stephen Lin
B. Guo
ViT
483
21,656
0
25 Mar 2021
ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases
Stéphane d'Ascoli
Hugo Touvron
Matthew L. Leavitt
Ari S. Morcos
Giulio Biroli
Levent Sagun
ViT
143
835
0
19 Mar 2021
Understanding Invariance via Feedforward Inversion of Discriminatively Trained Classifiers
Piotr Teterwak
Chiyuan Zhang
Dilip Krishnan
Michael C. Mozer
69
10
0
15 Mar 2021
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks
Zhengyan Zhang
Guangxuan Xiao
Yongwei Li
Tian Lv
Fanchao Qi
Zhiyuan Liu
Yasheng Wang
Xin Jiang
Maosong Sun
AAML
153
74
0
18 Jan 2021
Transformers in Vision: A Survey
Salman Khan
Muzammal Naseer
Munawar Hayat
Syed Waqas Zamir
Fahad Shahbaz Khan
M. Shah
ViT
359
2,549
0
04 Jan 2021
A Survey on Visual Transformer
Kai Han
Yunhe Wang
Hanting Chen
Xinghao Chen
Jianyuan Guo
...
Chunjing Xu
Yixing Xu
Zhaohui Yang
Yiman Zhang
Dacheng Tao
ViT
229
2,262
0
23 Dec 2020
Machine Learning for Cataract Classification and Grading on Ophthalmic Imaging Modalities: A Survey
Xiaoqin Zhang
Yan Hu
Zunjie Xiao
Jiansheng Fang
Risa Higashita
Jiang-Dong Liu
127
41
0
09 Dec 2020
Efficient Transformers: A Survey
Yi Tay
Mostafa Dehghani
Dara Bahri
Donald Metzler
VLM
196
1,132
0
14 Sep 2020
Accurate Lung Nodules Segmentation with Detailed Representation Transfer and Soft Mask Supervision
Changwei Wang
Rongtao Xu
Shibiao Xu
Weiliang Meng
Jun Xiao
Xiaopeng Zhang
104
7
0
29 Jul 2020
Synthesizer: Rethinking Self-Attention in Transformer Models
Yi Tay
Dara Bahri
Donald Metzler
Da-Cheng Juan
Zhe Zhao
Che Zheng
70
342
0
02 May 2020
Characterizing Structural Regularities of Labeled Data in Overparameterized Models
Ziheng Jiang
Chiyuan Zhang
Kunal Talwar
Michael C. Mozer
TDI
68
104
0
08 Feb 2020
Scaling Out-of-Distribution Detection for Real-World Settings
Dan Hendrycks
Steven Basart
Mantas Mazeika
Andy Zou
Joe Kwon
Mohammadreza Mostajabi
Jacob Steinhardt
Basel Alomair
OODD
211
487
0
25 Nov 2019
On the Relationship between Self-Attention and Convolutional Layers
Jean-Baptiste Cordonnier
Andreas Loukas
Martin Jaggi
141
535
0
08 Nov 2019
A Brain-inspired Algorithm for Training Highly Sparse Neural Networks
Zahra Atashgahi
Joost Pieterse
Shiwei Liu
Decebal Constantin Mocanu
Raymond N. J. Veldhuis
Mykola Pechenizkiy
74
15
0
17 Mar 2019
Are All Layers Created Equal?
Chiyuan Zhang
Samy Bengio
Y. Singer
111
140
0
06 Feb 2019
Xception: Deep Learning with Depthwise Separable Convolutions
François Chollet
MDE BDL PINN
1.5K
14,648
0
07 Oct 2016