LAER-MoE: Load-Adaptive Expert Re-layout for Efficient Mixture-of-Experts Training

Xinyi Liu
Yujie Wang
Fangcheng Fu
Xuefeng Xiao
Huixia Li
Jiashi Li
Bin Cui
Main: 14 pages · 12 figures · 4 tables · Bibliography: 4 pages · Appendix: 1 page
Abstract

Expert parallelism is vital for effectively training Mixture-of-Experts (MoE) models: it lets different devices host distinct experts, with each device processing different input data. However, during expert-parallel training, dynamic routing results in significant load imbalance among experts, and a handful of overloaded experts stall the overall iteration, emerging as a training bottleneck. In this paper, we introduce LAER-MoE, an efficient MoE training framework. The core of LAER-MoE is a novel parallel paradigm, Fully Sharded Expert Parallel (FSEP), which fully partitions each expert's parameters across the number of devices and restores partial experts at expert granularity through All-to-All communication during training. This allows for flexible re-layout of expert parameters during training to enhance load balancing. In particular, we perform fine-grained scheduling of communication operations to minimize communication overhead. Additionally, we develop a load balancing planner that formulates re-layout strategies for experts and routing schemes for tokens during training. We perform experiments on an A100 cluster, and the results indicate that our system achieves up to 1.69x acceleration compared to current state-of-the-art training systems. Source code available at this https URL.
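To make the FSEP idea concrete, here is a minimal NumPy sketch of the scheme the abstract describes: every expert's weight matrix is split evenly across all devices, and any expert can be "restored" on demand by gathering its shards (modeling the All-to-All step), which is what makes flexible expert re-layout possible. All names, shapes, and the single-process simulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

NUM_DEVICES = 4
NUM_EXPERTS = 8
HIDDEN, FFN = 16, 32  # toy expert weight shape (HIDDEN x FFN)

rng = np.random.default_rng(0)
experts = [rng.standard_normal((HIDDEN, FFN)) for _ in range(NUM_EXPERTS)]

# Fully shard: each device holds a 1/NUM_DEVICES slice of *every* expert,
# so no single device owns a whole (potentially overloaded) expert.
shards = {
    d: [np.array_split(w, NUM_DEVICES, axis=0)[d] for w in experts]
    for d in range(NUM_DEVICES)
}

def restore_expert(eid: int) -> np.ndarray:
    """Gather one expert's shards from all devices (simulated All-to-All)."""
    return np.concatenate([shards[d][eid] for d in range(NUM_DEVICES)], axis=0)

# Any device can now materialize any expert, so a load-balancing planner is
# free to re-lay-out experts onto whichever devices are underloaded.
restored = restore_expert(3)
assert np.allclose(restored, experts[3])
```

In a real system the gather would be an All-to-All collective overlapped with computation; the sketch only shows why full sharding decouples expert placement from parameter ownership.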
