
BASE Layers: Simplifying Training of Large, Sparse Models (arXiv:2103.16716)
30 March 2021
M. Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer
MoE

Papers citing "BASE Layers: Simplifying Training of Large, Sparse Models"

50 / 208 papers shown
Accelerating Distributed MoE Training and Inference with Lina
Jiamin Li, Yimin Jiang, Yibo Zhu, Cong Wang, Hong-Yu Xu
MoE · 25 · 60 · 0 · 31 Oct 2022
M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design
Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang
MoE · 42 · 82 · 0 · 26 Oct 2022
On the Adversarial Robustness of Mixture of Experts
J. Puigcerver, Rodolphe Jenatton, C. Riquelme, Pranjal Awasthi, Srinadh Bhojanapalli
OOD, AAML, MoE · 48 · 18 · 0 · 19 Oct 2022
AutoMoE: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation
Ganesh Jawahar, Subhabrata Mukherjee, Xiaodong Liu, Young Jin Kim, Muhammad Abdul-Mageed, L. Lakshmanan, Ahmed Hassan Awadallah, Sébastien Bubeck, Jianfeng Gao
MoE · 38 · 5 · 0 · 14 Oct 2022
Mixture of Attention Heads: Selecting Attention Heads Per Token
Xiaofeng Zhang, Songlin Yang, Zeyu Huang, Jie Zhou, Wenge Rong, Zhang Xiong
MoE · 99 · 42 · 0 · 11 Oct 2022
Sparsity-Constrained Optimal Transport
Tianlin Liu, J. Puigcerver, Mathieu Blondel
OT · 36 · 22 · 0 · 30 Sep 2022
A Review of Sparse Expert Models in Deep Learning
W. Fedus, J. Dean, Barret Zoph
MoE · 25 · 144 · 0 · 04 Sep 2022
Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models
Margaret Li, Suchin Gururangan, Tim Dettmers, M. Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer
MoMe · 41 · 144 · 0 · 05 Aug 2022
BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage
Kurt Shuster, Jing Xu, M. Komeili, Da Ju, Eric Michael Smith, ..., Naman Goyal, Arthur Szlam, Y-Lan Boureau, Melanie Kambadur, Jason Weston
LM&Ro, KELM · 37 · 235 · 0 · 05 Aug 2022
Towards Understanding Mixture of Experts in Deep Learning
Zixiang Chen, Yihe Deng, Yue-bo Wu, Quanquan Gu, Yuan-Fang Li
MLT, MoE · 42 · 53 · 0 · 04 Aug 2022
The Neural Race Reduction: Dynamics of Abstraction in Gated Networks
Andrew M. Saxe, Shagun Sodhani, Sam Lewallen
AI4CE · 32 · 34 · 0 · 21 Jul 2022
New Auction Algorithms for Path Planning, Network Transport, and Reinforcement Learning
Dimitri Bertsekas
11 · 2 · 0 · 19 Jul 2022
MoEC: Mixture of Expert Clusters
Yuan Xie, Shaohan Huang, Tianyu Chen, Furu Wei
MoE · 45 · 11 · 0 · 19 Jul 2022
Neural Implicit Dictionary via Mixture-of-Expert Training
Peihao Wang, Zhiwen Fan, Tianlong Chen, Zhangyang Wang
25 · 12 · 0 · 08 Jul 2022
Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems
Jack G. M. FitzGerald, Shankar Ananthakrishnan, Konstantine Arkoudas, Davide Bernardi, Abhishek Bhagia, ..., Pan Wei, Haiyang Yu, Shuai Zheng, Gokhan Tur, Premkumar Natarajan
ELM · 14 · 30 · 0 · 15 Jun 2022
DIRECTOR: Generator-Classifiers For Supervised Language Modeling
Kushal Arora, Kurt Shuster, Sainbayar Sukhbaatar, Jason Weston
VLM · 32 · 40 · 0 · 15 Jun 2022
Tutel: Adaptive Mixture-of-Experts at Scale
Changho Hwang, Wei Cui, Yifan Xiong, Ziyue Yang, Ze Liu, ..., Joe Chau, Peng Cheng, Fan Yang, Mao Yang, Y. Xiong
MoE · 118 · 112 · 0 · 07 Jun 2022
Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts
Basil Mustafa, C. Riquelme, J. Puigcerver, Rodolphe Jenatton, N. Houlsby
VLM, MoE · 33 · 185 · 0 · 06 Jun 2022
Task-Specific Expert Pruning for Sparse Mixture-of-Experts
Tianyu Chen, Shaohan Huang, Yuan Xie, Binxing Jiao, Daxin Jiang, Haoyi Zhou, Jianxin Li, Furu Wei
MoE · 39 · 40 · 0 · 01 Jun 2022
Gating Dropout: Communication-efficient Regularization for Sparsely Activated Transformers
R. Liu, Young Jin Kim, Alexandre Muzio, Hany Awadalla
MoE · 55 · 22 · 0 · 28 May 2022
Sparse Mixers: Combining MoE and Mixing to build a more efficient BERT
James Lee-Thorp, Joshua Ainslie
MoE · 34 · 11 · 0 · 24 May 2022
Sparsely-gated Mixture-of-Expert Layers for CNN Interpretability
Svetlana Pavlitska, Christian Hubschneider, Lukas Struppek, J. Marius Zöllner
MoE · 37 · 11 · 0 · 22 Apr 2022
On the Representation Collapse of Sparse Mixture of Experts
Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, ..., Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, Furu Wei
MoMe, MoE · 53 · 97 · 0 · 20 Apr 2022
StableMoE: Stable Routing Strategy for Mixture of Experts
Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, Furu Wei
MoE · 24 · 62 · 0 · 18 Apr 2022
Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners
Shashank Gupta, Subhabrata Mukherjee, K. Subudhi, Eduardo Gonzalez, Damien Jose, Ahmed Hassan Awadallah, Jianfeng Gao
MoE · 27 · 49 · 0 · 16 Apr 2022
MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation
Simiao Zuo, Qingru Zhang, Chen Liang, Pengcheng He, T. Zhao, Weizhu Chen
MoE · 30 · 38 · 0 · 15 Apr 2022
HetuMoE: An Efficient Trillion-scale Mixture-of-Expert Distributed Training System
Xiaonan Nie, Pinxue Zhao, Xupeng Miao, Tong Zhao, Bin Cui
MoE · 26 · 36 · 0 · 28 Mar 2022
Language Models that Seek for Knowledge: Modular Search & Generation for Dialogue and Prompt Completion
Kurt Shuster, M. Komeili, Leonard Adolphs, Stephen Roller, Arthur Szlam, Jason Weston
KELM · 45 · 122 · 0 · 24 Mar 2022
Efficient Language Modeling with Sparse all-MLP
Ping Yu, Mikel Artetxe, Myle Ott, Sam Shleifer, Hongyu Gong, Ves Stoyanov, Xian Li
MoE · 23 · 11 · 0 · 14 Mar 2022
Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models
Ze-Feng Gao, Peiyu Liu, Wayne Xin Zhao, Zhong-Yi Lu, Ji-Rong Wen
MoE · 24 · 27 · 0 · 02 Mar 2022
Mixture-of-Experts with Expert Choice Routing
Yan-Quan Zhou, Tao Lei, Han-Chu Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew M. Dai, Zhifeng Chen, Quoc V. Le, James Laudon
MoE · 160 · 331 · 0 · 18 Feb 2022
ST-MoE: Designing Stable and Transferable Sparse Expert Models
Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, J. Dean, Noam M. Shazeer, W. Fedus
MoE · 24 · 183 · 0 · 17 Feb 2022
A Survey on Dynamic Neural Networks for Natural Language Processing
Canwen Xu, Julian McAuley
AI4CE · 30 · 28 · 0 · 15 Feb 2022
Unified Scaling Laws for Routed Language Models
Aidan Clark, Diego de Las Casas, Aurelia Guy, A. Mensch, Michela Paganini, ..., Oriol Vinyals, Jack W. Rae, Erich Elsen, Koray Kavukcuoglu, Karen Simonyan
MoE · 27 · 177 · 0 · 02 Feb 2022
Nonlinear Initialization Methods for Low-Rank Neural Networks
Kiran Vodrahalli, Rakesh Shivanna, M. Sathiamoorthy, Sagar Jain, Ed H. Chi
19 · 4 · 0 · 02 Feb 2022
One Student Knows All Experts Know: From Sparse to Dense
Fuzhao Xue, Xiaoxin He, Xiaozhe Ren, Yuxuan Lou, Yang You
MoMe, MoE · 40 · 20 · 0 · 26 Jan 2022
EvoMoE: An Evolutional Mixture-of-Experts Training Framework via Dense-To-Sparse Gate
Xiaonan Nie, Xupeng Miao, Shijie Cao, Lingxiao Ma, Qibin Liu, Jilong Xue, Youshan Miao, Yi Liu, Zhi-Xin Yang, Bin Cui
MoMe, MoE · 29 · 23 · 0 · 29 Dec 2021
ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation
Shuohuan Wang, Yu Sun, Yang Xiang, Zhihua Wu, Siyu Ding, ..., Tian Wu, Wei Zeng, Ge Li, Wen Gao, Haifeng Wang
ELM · 39 · 79 · 0 · 23 Dec 2021
Efficient Large Scale Language Modeling with Mixtures of Experts
Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, ..., Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, Ves Stoyanov
MoE · 61 · 188 · 0 · 20 Dec 2021
GLaM: Efficient Scaling of Language Models with Mixture-of-Experts
Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, ..., Kun Zhang, Quoc V. Le, Yonghui Wu, Zhehuai Chen, Claire Cui
ALM, MoE · 72 · 775 · 0 · 13 Dec 2021
Tricks for Training Sparse Translation Models
Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, M. Lewis, Angela Fan
MoE · 150 · 19 · 0 · 15 Oct 2021
Taming Sparsely Activated Transformer with Stochastic Experts
Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, T. Zhao, Jianfeng Gao
MoE · 44 · 109 · 0 · 08 Oct 2021
8-bit Optimizers via Block-wise Quantization
Tim Dettmers, M. Lewis, Sam Shleifer, Luke Zettlemoyer
MQ · 34 · 276 · 0 · 06 Oct 2021
MoEfication: Transformer Feed-forward Layers are Mixtures of Experts
Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
MoE · 29 · 118 · 0 · 05 Oct 2021
Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference
Sneha Kudugunta, Yanping Huang, Ankur Bapna, M. Krikun, Dmitry Lepikhin, Minh-Thang Luong, Orhan Firat
MoE · 119 · 107 · 0 · 24 Sep 2021
Unbiased Gradient Estimation with Balanced Assignments for Mixtures of Experts
W. Kool, Chris J. Maddison, A. Mnih
34 · 10 · 0 · 24 Sep 2021
Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
253 · 701 · 0 · 27 Aug 2021
Towards Structured Dynamic Sparse Pre-Training of BERT
A. Dietrich, Frithjof Gressmann, Douglas Orr, Ivan Chelombiev, Daniel Justus, Carlo Luschi
30 · 17 · 0 · 13 Aug 2021
DEMix Layers: Disentangling Domains for Modular Language Modeling
Suchin Gururangan, Michael Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer
KELM, MoE · 21 · 128 · 0 · 11 Aug 2021
CPM-2: Large-scale Cost-effective Pre-trained Language Models
Zhengyan Zhang, Yuxian Gu, Xu Han, Shengqi Chen, Chaojun Xiao, ..., Minlie Huang, Wentao Han, Yang Liu, Xiaoyan Zhu, Maosong Sun
MoE · 45 · 86 · 0 · 20 Jun 2021