Layer-adaptive Expert Pruning for Pre-Training of Mixture-of-Experts Large Language Models

YuanLab.ai
Shawn Wu
Jiangang Luo
Tong Yu
Darcy Chen
Sean Wang
Xudong Zhao
Louie Li
Claire Wang
Hunter He
Carol Wang
Allen Wang
Main: 8 pages, 4 figures, 8 tables; bibliography: 2 pages; appendix: 2 pages
Abstract

Although Mixture-of-Experts (MoE) Large Language Models (LLMs) deliver superior accuracy with a reduced number of active parameters, their pre-training remains a significant computational bottleneck due to underutilized experts and limited training efficiency. This work introduces a Layer-Adaptive Expert Pruning (LAEP) algorithm designed for the pre-training stage of MoE LLMs. In contrast to previous expert pruning approaches that operate primarily in the post-training phase, the proposed algorithm improves training efficiency by selectively pruning underutilized experts and reorganizing experts across computing devices according to token distribution statistics. Comprehensive experiments demonstrate that LAEP effectively reduces model size and substantially improves pre-training efficiency. In particular, when pre-training the 1010B Base model from scratch, LAEP achieves a 48.3% improvement in training efficiency alongside a 33.3% parameter reduction, while still delivering strong performance across multiple domains.
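To make the core idea concrete, the sketch below illustrates one plausible way to prune underutilized experts from per-layer token-routing statistics, as the abstract describes. It is not the authors' implementation: the function name `prune_underutilized_experts`, the `util_floor` threshold, and the profiling format are all hypothetical assumptions for illustration only.

```python
# Illustrative sketch (not the paper's implementation) of pruning experts
# whose routed-token share falls below a layer-adaptive threshold.
import numpy as np

def prune_underutilized_experts(token_counts, min_keep=1, util_floor=0.5):
    """Return, for each MoE layer, the indices of experts to keep.

    token_counts: list of 1-D arrays, one per layer; entry e is the number
                  of tokens routed to expert e during a profiling window.
    util_floor:   fraction of the uniform share (1 / num_experts) below
                  which an expert is treated as underutilized (assumed rule).
    """
    kept = []
    for counts in token_counts:
        counts = np.asarray(counts, dtype=np.float64)
        share = counts / max(counts.sum(), 1.0)      # per-expert token share
        threshold = util_floor / len(counts)         # layer-adaptive cut-off
        survivors = np.where(share >= threshold)[0]
        if len(survivors) < min_keep:                # never prune a layer empty
            survivors = np.argsort(-share)[:min_keep]
        kept.append(survivors)
    return kept

# Example: two layers with 8 experts each; layer 0 has two nearly idle experts.
stats = [
    [900, 850, 10, 920, 880, 5, 870, 910],
    [400, 420, 390, 410, 405, 395, 415, 400],
]
for layer, experts in enumerate(prune_underutilized_experts(stats)):
    print(f"layer {layer}: keep experts {experts.tolist()}")
```

In this toy run, layer 0 drops its two rarely used experts while layer 1, whose routing is nearly uniform, keeps all of them; surviving experts could then be repacked across devices to balance load, in the spirit of the reorganization step the abstract mentions.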
