M6-T: Exploring Sparse Expert Models and Beyond

31 May 2021
An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, Dingyang Zhang, Wei Lin, Lin Qu, Jingren Zhou, Hongxia Yang
    MoE
Abstract

Mixture-of-Experts (MoE) models can achieve promising results with an outrageously large number of parameters at constant computational cost, and they have therefore become a trend in model scaling. Still, it remains a mystery how MoE layers bring quality gains by leveraging parameters with sparse activation. In this work, we investigate several key factors in sparse expert models. We observe that load imbalance may not be a significant problem affecting model quality, contrary to the perspectives of recent studies, while the number of sparsely activated experts k and the expert capacity C in top-k routing can make a significant difference in this context. Furthermore, we take a step forward and propose a simple method called expert prototyping, which splits experts into different prototypes and applies k top-1 routing. This strategy improves model quality while maintaining constant computational cost, and our further exploration of extremely large-scale models shows that it is more effective for training larger models. We push the model scale to over 1 trillion parameters and implement it on only 480 NVIDIA V100-32GB GPUs, compared with recent SOTAs trained on 2048 TPU cores. The proposed giant model achieves substantial speedup in convergence over a same-size baseline.
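
The expert-prototyping idea described in the abstract lends itself to a short illustration. Below is a minimal PyTorch-style sketch (not the authors' released code): the experts are split into k prototype groups, each group routes every token to exactly one of its experts (top-1), and the k selected expert outputs are summed. The module names, the feed-forward expert shape, and the omission of expert-capacity C handling and load-balancing losses are all simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeMoE(nn.Module):
    """Expert prototyping sketch: k prototype groups, top-1 routing per group."""

    def __init__(self, d_model: int, num_experts: int, k: int):
        super().__init__()
        assert num_experts % k == 0, "experts must split evenly into k prototypes"
        self.k = k
        self.group_size = num_experts // k
        # One feed-forward expert per slot; each prototype group owns `group_size` experts.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )
        # One router (gate) per prototype group.
        self.routers = nn.ModuleList(
            nn.Linear(d_model, self.group_size) for _ in range(k)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model). Each group sends every token to exactly one
        # of its experts, so k experts fire per token in total, i.e. the same
        # amount of expert computation as top-k routing over a flat pool.
        out = torch.zeros_like(x)
        for g, router in enumerate(self.routers):
            gate_probs = F.softmax(router(x), dim=-1)   # (num_tokens, group_size)
            top_p, top_idx = gate_probs.max(dim=-1)     # top-1 expert within group g
            for local_e in range(self.group_size):
                mask = top_idx == local_e
                if mask.any():
                    expert = self.experts[g * self.group_size + local_e]
                    out[mask] = out[mask] + top_p[mask].unsqueeze(-1) * expert(x[mask])
        return out
```

With num_experts=16 and k=4, for instance, four experts fire per token, matching the expert compute of top-4 routing over a single pool of sixteen, while every router only ever performs a top-1 selection.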
