Scaling Laws for Floating Point Quantization Training

5 January 2025
Xingwu Sun, Shuaipeng Li, Ruobing Xie, Weidong Han, Kan Wu, Zhen Yang, Yixing Li, An Wang, Shuai Li, Jinbao Xue, Yu Cheng, Yangyu Tao, Zhanhui Kang, Chengzhong Xu, Di Wang, Jie Jiang
Main: 8 pages · Appendix: 16 pages · Bibliography: 3 pages · 14 figures · 3 tables
Abstract

Low-precision training is considered an effective strategy for reducing both training and downstream inference costs. Previous scaling laws for precision focus mainly on integer quantization and pay little attention to the constituents of floating-point quantization, so they cannot fit LLM losses well in this scenario. Meanwhile, although floating-point quantization training is more commonly used in production, research on it has remained relatively superficial. In this paper, we thoroughly explore the effects of the floating-point quantization targets, exponent bits, mantissa bits, and the calculation granularity of the scaling factor on the training performance of LLMs. In addition to presenting an accurate, unified scaling law for floating-point quantization training, we offer several suggestions to the community: (1) Exponent bits contribute slightly more to model performance than mantissa bits, and we provide the optimal exponent-mantissa bit ratio for different bit widths as a reference for hardware manufacturers. (2) We identify a critical data size in low-precision LLM training: training on data beyond this critical size degrades LLM performance. (3) The optimal floating-point quantization precision is directly proportional to computational power, but across a wide range of compute budgets, we estimate that the best cost-performance precision lies between 4 and 8 bits.
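
To make the abstract's quantities concrete, below is a minimal NumPy sketch (not the authors' code) of simulated floating-point quantization: it fake-quantizes a tensor to a format with configurable exponent bits, mantissa bits, and one scaling factor per block of values, which are exactly the constituents the scaling law covers. The function name, block size, and rounding details are illustrative assumptions rather than anything specified in the paper.

import numpy as np

def fp_quantize(x, exp_bits=4, man_bits=3, block_size=128):
    """Fake-quantize x to a low-precision float format (E=exp_bits, M=man_bits),
    using one scaling factor per block of block_size values."""
    flat = x.astype(np.float64).ravel()
    pad = (-len(flat)) % block_size
    blocks = np.pad(flat, (0, pad)).reshape(-1, block_size)

    # Representable range of the target format (implicit leading 1; this sketch
    # reserves no encodings for inf/nan).
    bias = 2 ** (exp_bits - 1) - 1
    max_exp = (2 ** exp_bits - 1) - bias
    max_val = (2.0 - 2.0 ** (-man_bits)) * 2.0 ** max_exp

    # Per-block scaling factor maps each block's max magnitude onto max_val.
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    scale = np.where(amax > 0, amax / max_val, 1.0)
    scaled = blocks / scale

    # Round onto the mantissa grid implied by each value's exponent
    # (exponent clamped to the subnormal range so log2(0) is never taken).
    mag = np.abs(scaled)
    exp = np.floor(np.log2(np.maximum(mag, 2.0 ** (1 - bias))))
    step = 2.0 ** (exp - man_bits)
    q = np.sign(scaled) * np.clip(np.round(mag / step) * step, 0.0, max_val)

    return (q * scale).ravel()[: flat.size].reshape(x.shape).astype(x.dtype)

if __name__ == "__main__":
    w = np.random.randn(4, 256).astype(np.float32)
    w_q = fp_quantize(w, exp_bits=4, man_bits=3, block_size=128)  # an E4M3-like format
    print("mean abs quantization error:", float(np.abs(w - w_q).mean()))

Sweeping exp_bits and man_bits at a fixed total bit width in a harness like this is one way to explore, qualitatively, the kind of exponent-mantissa trade-off the paper quantifies.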

@article{sun2025_2501.02423,
  title={Scaling Laws for Floating Point Quantization Training},
  author={Xingwu Sun and Shuaipeng Li and Ruobing Xie and Weidong Han and Kan Wu and Zhen Yang and Yixing Li and An Wang and Shuai Li and Jinbao Xue and Yu Cheng and Yangyu Tao and Zhanhui Kang and Chengzhong Xu and Di Wang and Jie Jiang},
  journal={arXiv preprint arXiv:2501.02423},
  year={2025}
}