Expert Divergence Learning for MoE-based Language Models

Jiaang Li
Haibin Chen
Langming Liu
Yujin Yuan
Yadao Wang
Yizhen Zhang
Chengting Yu
Xin Tong
Weidong Zhang
Shilei Liu
Wenbo Su
Bo Zheng
Main: 9 pages · Appendix: 12 pages · Bibliography: 3 pages · 10 figures · 9 tables
Abstract

The Mixture-of-Experts (MoE) architecture is a powerful technique for scaling language models, yet it often suffers from expert homogenization, where experts learn redundant functionalities, limiting the architecture's full potential. To address this, we introduce Expert Divergence Learning, a novel pre-training strategy that explicitly encourages functional specialization among experts. Our method incorporates a label-driven auxiliary loss that leverages domain labels inherent in pre-training corpora to maximize the Jensen-Shannon Divergence between the expert routing distributions of different data domains. This objective guides the model to develop divergent routing policies across different domains and closer routing policies within the same domain, leading to emergent and organized expert specialization. We validate our approach by pre-training MoE models of up to 15 billion parameters from scratch. Experimental results demonstrate that models trained with Expert Divergence Learning not only achieve a lower language modeling loss but also exhibit significant performance improvements across a diverse range of downstream benchmarks. Further analysis confirms that our method effectively mitigates expert homogenization and induces greater functional specialization, all with negligible computational overhead during training.
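The core idea of the auxiliary loss can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it assumes routing distributions are already averaged per domain and shows only the Jensen-Shannon Divergence term, negated so that minimizing the loss maximizes the divergence between two hypothetical domains.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon Divergence: symmetric, bounded by ln(2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical mean expert-routing distributions over 4 experts
# for two data domains (e.g., code vs. news).
routing_domain_a = [0.7, 0.1, 0.1, 0.1]
routing_domain_b = [0.1, 0.1, 0.1, 0.7]

# Auxiliary loss: negated JSD, so gradient descent pushes the two
# domains' routing distributions apart.
aux_loss = -jsd(routing_domain_a, routing_domain_b)
```

In practice this term would be computed from router softmax outputs and added, with a small weight, to the language modeling loss; identical distributions give a JSD of zero, so the loss only rewards genuine divergence between domains.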
