Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models

3 June 2024
Tianwen Wei, Bo Zhu, Liang Zhao, Cheng Cheng, Biye Li, Weiwei Lü, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Liang Zeng, Xiaokun Wang, Yutuan Ma, Rui Hu, Shuicheng Yan, Han Fang, Yahui Zhou
Abstract

In this technical report, we introduce the training methodologies implemented in the development of Skywork-MoE, a high-performance mixture-of-experts (MoE) large language model (LLM) with 146 billion parameters and 16 experts. It is initialized from the pre-existing dense checkpoints of our Skywork-13B model. We explore the comparative effectiveness of upcycling from these dense checkpoints versus training from scratch. Our findings suggest that the choice between the two approaches should weigh both the performance of the existing dense checkpoints and the MoE training budget. We highlight two innovative techniques: gating logit normalization, which improves expert diversification, and adaptive auxiliary loss coefficients, which allow layer-specific adjustment of the auxiliary loss weight. Our experimental results validate the effectiveness of these methods. Leveraging these techniques and insights, we trained our upcycled Skywork-MoE on a condensed subset of our SkyPile corpus. The evaluation results demonstrate that our model delivers strong performance across a wide range of benchmarks.
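
The abstract names the two techniques without detailing them. Below is a minimal PyTorch sketch of how gating logit normalization and a layer-specific adaptive auxiliary loss coefficient might look, based only on the description above; it is not the authors' released implementation, and all identifiers (NormalizedGate, update_aux_coef, lambda_scale, target_violation, expert_load) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NormalizedGate(nn.Module):
    """Top-2 MoE gate with logit normalization (sketch, not the paper's code)."""

    def __init__(self, hidden_size: int, num_experts: int, lambda_scale: float = 1.0):
        super().__init__()
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.lambda_scale = lambda_scale  # controls how peaked the gate distribution is

    def forward(self, x: torch.Tensor):
        logits = self.router(x)                               # [tokens, num_experts]
        # Gating logit normalization: standardize per token, then rescale.
        mean = logits.mean(dim=-1, keepdim=True)
        std = logits.std(dim=-1, keepdim=True)
        logits = self.lambda_scale * (logits - mean) / (std + 1e-6)
        probs = F.softmax(logits, dim=-1)
        top2_probs, top2_idx = probs.topk(2, dim=-1)
        return top2_probs, top2_idx, probs


def update_aux_coef(coef: float, expert_load: torch.Tensor,
                    target_violation: float = 0.1,
                    step: float = 1e-3) -> float:
    """Adapt one layer's auxiliary load-balancing loss coefficient (hypothetical rule).

    expert_load: fraction of tokens routed to each expert in this layer, shape [num_experts].
    If routing is more imbalanced than the target, increase the coefficient; otherwise decay it.
    """
    num_experts = expert_load.numel()
    # Maximum relative deviation from the uniform share 1 / num_experts.
    violation = (expert_load - 1.0 / num_experts).abs().max().item() * num_experts
    if violation > target_violation:
        coef = coef * (1.0 + step)
    else:
        coef = coef * (1.0 - step)
    return coef
```

Standardizing the router logits and rescaling them by lambda_scale fixes the spread of the gate distribution, one plausible reading of how the normalization "improves expert diversification"; the per-layer coefficient update raises the load-balancing weight only for layers whose routing is more imbalanced than a target and decays it otherwise, which is one way to realize layer-specific auxiliary loss adjustment.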
