Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs

7 March 2025
Ling Team
Binwei Zeng, Chao Huang, Chao Zhang, Changxin Tian, Cong Chen, Dingnan Jin, Feng Yu, Feng Zhu, Feng Yuan, Fakang Wang, Gangshan Wang, Guangyao Zhai, Haitao Zhang, Huizhong Li, Jun Zhou, Jia Liu, Junpeng Fang, Junjie Ou, Jun Hu, Ji Luo, Ji Zhang, Jian Liu, Jian Sha, Jianxue Qian, Jiewei Wu, Junping Zhao, Jianguo Li, Jubao Feng, Jingchao Di, Junming Xu, Jinghua Yao, Kuan Xu, Kewei Du, Longfei Li, Lei Liang, Lu Yu, Li Tang, Lin Ju, Peng Xu, Qing Cui, Song Liu, Shicheng Li, Shun Song, Song Yan, Tengwei Cai, Tianyi Chen, Ting Guo, Ting Huang, Tao Feng, Tao Wu, Wei Wu, Xiaolu Zhang, Xueming Yang, Xin Zhao, Xiaobo Hu, Xin Lin, Yao Zhao, Yilong Wang, Yongzhen Guo, Yuanyuan Wang, Yue Yang, Yang Cao, Yuhao Fu, Yi Xiong, Yanzhe Li, Zhe Li, Zhiqiang Zhang, Ziqi Liu, Zhaoxin Huan, Zujie Wen, Zhenhang Sun, Zhuoxuan Du, Zhengyu He
Topics: MoE, ALM
Abstract

In this technical report, we tackle the challenges of training large-scale Mixture-of-Experts (MoE) models, focusing on overcoming the cost inefficiency and resource limitations prevalent in such systems. To address these issues, we present two MoE large language models (LLMs) of different sizes, Ling-Lite and Ling-Plus (referred to as "Bailing" in Chinese, spelled Bǎilíng in Pinyin). Ling-Lite contains 16.8 billion parameters, of which 2.75 billion are activated, while Ling-Plus contains 290 billion parameters, of which 28.8 billion are activated. Both models achieve performance comparable to industry-leading benchmarks. This report offers actionable insights for improving the efficiency and accessibility of AI development in resource-constrained settings, promoting more scalable and sustainable technologies. Specifically, to reduce the training costs of large-scale MoE models, we propose innovative methods for (1) optimizing the model architecture and training process, (2) refining the handling of training anomalies, and (3) improving the efficiency of model evaluation. Additionally, leveraging high-quality data generated from knowledge graphs, our models demonstrate superior tool-use capabilities compared to other models. Ultimately, our experimental findings show that a 300B MoE LLM can be effectively trained on lower-performance devices while achieving performance comparable to models of a similar scale, including both dense and MoE models. Compared with high-performance devices, using a lower-specification hardware system during the pre-training phase yields significant savings, reducing computing costs by approximately 20%. The models can be accessed at this https URL.
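As a rough illustration of why MoE activation matters for compute cost, the sketch below implements a generic top-k expert-routing layer of the kind MoE LLMs are built from, and prints the activated-to-total parameter ratios implied by the abstract (2.75B of 16.8B for Ling-Lite, 28.8B of 290B for Ling-Plus). The expert count, hidden sizes, and top_k value are hypothetical choices for demonstration only, not the Ling architecture described in the report.

# Minimal, illustrative sketch of top-k Mixture-of-Experts (MoE) routing.
# Layer sizes, expert count, and top_k are assumptions, not Ling's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)               # routing probabilities per expert
        weights, indices = torch.topk(probs, self.top_k)        # each token keeps only its top_k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize the kept weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                    # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = TopKMoELayer(d_model=64, d_ff=256, num_experts=8, top_k=2)
    tokens = torch.randn(10, 64)
    print(layer(tokens).shape)   # torch.Size([10, 64])
    # Activated-parameter ratios from the abstract's figures:
    print(2.75 / 16.8)           # ~0.16 for Ling-Lite (2.75B activated of 16.8B total)
    print(28.8 / 290.0)          # ~0.10 for Ling-Plus (28.8B activated of 290B total)

Because each token is processed by only its top-k experts, per-token FLOPs scale with the activated parameters rather than the full parameter count, which is why a 290B-parameter MoE model with roughly 28.8B activated parameters can be trained far more cheaply than a dense model of the same total size.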

@article{team2025_2503.05139,
  title={ Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs },
  author={ Ling Team and Binwei Zeng and Chao Huang and Chao Zhang and Changxin Tian and Cong Chen and Dingnan Jin and Feng Yu and Feng Zhu and Feng Yuan and Fakang Wang and Gangshan Wang and Guangyao Zhai and Haitao Zhang and Huizhong Li and Jun Zhou and Jia Liu and Junpeng Fang and Junjie Ou and Jun Hu and Ji Luo and Ji Zhang and Jian Liu and Jian Sha and Jianxue Qian and Jiewei Wu and Junping Zhao and Jianguo Li and Jubao Feng and Jingchao Di and Junming Xu and Jinghua Yao and Kuan Xu and Kewei Du and Longfei Li and Lei Liang and Lu Yu and Li Tang and Lin Ju and Peng Xu and Qing Cui and Song Liu and Shicheng Li and Shun Song and Song Yan and Tengwei Cai and Tianyi Chen and Ting Guo and Ting Huang and Tao Feng and Tao Wu and Wei Wu and Xiaolu Zhang and Xueming Yang and Xin Zhao and Xiaobo Hu and Xin Lin and Yao Zhao and Yilong Wang and Yongzhen Guo and Yuanyuan Wang and Yue Yang and Yang Cao and Yuhao Fu and Yi Xiong and Yanzhe Li and Zhe Li and Zhiqiang Zhang and Ziqi Liu and Zhaoxin Huan and Zujie Wen and Zhenhang Sun and Zhuoxuan Du and Zhengyu He },
  journal={arXiv preprint arXiv:2503.05139},
  year={ 2025 }
}