Advancing Expert Specialization for Better MoE

Mixture-of-Experts (MoE) models enable efficient scaling of large language models (LLMs) by activating only a subset of experts per input. However, we observe that the commonly used auxiliary load-balancing loss often leads to expert overlap and overly uniform routing, which hinders expert specialization and degrades overall performance during post-training. To address this, we propose a simple yet effective solution that introduces two complementary objectives: (1) an orthogonality loss that encourages experts to process distinct types of tokens, and (2) a variance loss that encourages more discriminative routing decisions. Gradient-level analysis shows that these objectives are compatible with the existing auxiliary loss and contribute to a better-optimized training process. Experiments across various model architectures and multiple benchmarks show that our method significantly enhances expert specialization. Notably, it improves classic auxiliary-loss MoE baselines by up to 23.79% while maintaining load balancing in downstream tasks, without any architectural modifications or additional components. We will release our code to the community.
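The abstract does not give the exact formulations, but a minimal sketch of the two objectives as they are described, assuming a standard softmax top-k MoE router, might look as follows. The tensor names (expert_reps, router_probs) and the use of pairwise cosine similarity for "overlap" and per-token probability variance for "discriminative routing" are illustrative assumptions, not the paper's definitions.

import torch
import torch.nn.functional as F


def orthogonality_loss(expert_reps: torch.Tensor) -> torch.Tensor:
    """Penalize overlap between experts.

    expert_reps: [num_experts, d] -- e.g., each expert's mean representation
    of the tokens routed to it (a hypothetical choice; the paper may use a
    different expert-level statistic).
    """
    reps = F.normalize(expert_reps, dim=-1)          # unit-norm rows
    gram = reps @ reps.t()                           # pairwise cosine similarity
    off_diag = gram - torch.diag(torch.diag(gram))   # ignore self-similarity
    return off_diag.pow(2).mean()                    # push experts toward orthogonality


def variance_loss(router_probs: torch.Tensor) -> torch.Tensor:
    """Encourage discriminative (peaked) routing decisions.

    router_probs: [num_tokens, num_experts] -- softmax outputs of the router.
    Higher per-token variance means a sharper routing distribution, so we
    return its negative as a loss to minimize.
    """
    return -router_probs.var(dim=-1).mean()


# Hypothetical combination with the usual load-balancing auxiliary loss:
# total_loss = lm_loss + a * load_balance_loss \
#              + b * orthogonality_loss(expert_reps) \
#              + c * variance_loss(router_probs)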
@article{guo2025_2505.22323,
  title   = {Advancing Expert Specialization for Better MoE},
  author  = {Hongcan Guo and Haolang Lu and Guoshun Nan and Bolun Chu and Jialin Zhuang and Yuan Yang and Wenhao Che and Sicong Leng and Qimei Cui and Xudong Jiang},
  journal = {arXiv preprint arXiv:2505.22323},
  year    = {2025}
}