Ascend HiFloat8 Format for Deep Learning

25 September 2024 · arXiv:2409.16626
Yuanyong Luo, Zhongxing Zhang, Richard Wu, Hu Liu, Ying Jin, Kai Zheng, Minmin Wang, Zhanying He, Guipeng Hu, Luyao Chen, Tianchi Hu, Junsong Wang, Minqi Chen, Mikhaylov Dmitry, Korviakov Vladimir, Bobrin Maxim, Yuhao Hu, Guanfu Chen, Zeyi Huang
Abstract

This preliminary white paper proposes HiFloat8 (abbreviated as HiF8), a novel 8-bit floating-point data format for deep learning. HiF8 features tapered precision: for normal value encoding, it provides 7 exponent values with a 3-bit mantissa, 8 exponent values with a 2-bit mantissa, and 16 exponent values with a 1-bit mantissa. Denormal value encoding extends the dynamic range by 7 extra powers of 2, from 31 to 38 binades (for comparison, FP16 covers 40 binades). HiF8 also encodes all the special values, except that positive and negative zero share a single bit pattern. Thanks to this better balance between precision and dynamic range, HiF8 can be used simultaneously in both the forward and backward passes of AI training. This paper describes the definition and rounding methods of HiF8, together with tentative training and inference solutions. To demonstrate the efficacy of HiF8, extensive simulation results on various neural networks, including traditional networks and large language models (LLMs), are also presented.
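
The binade counts above follow directly from the tapered-precision tiers. Below is a minimal sketch of that arithmetic, assuming only the tier counts quoted in the abstract (the actual HiF8 bit-level layout is defined in the paper itself and is not reproduced here): the normal-range tiers contribute 7 + 8 + 16 = 31 binades, and the denormal extension adds 7 more.

```python
# Illustrative arithmetic only; the tier counts are taken from the abstract.
# The real HiF8 bit-level encoding is specified in the paper, not here.

# Tapered precision: (number of exponent values, mantissa bits) per tier.
NORMAL_TIERS = [
    (7, 3),   # 7 exponent values with a 3-bit mantissa
    (8, 2),   # 8 exponent values with a 2-bit mantissa
    (16, 1),  # 16 exponent values with a 1-bit mantissa
]
DENORMAL_EXTRA_BINADES = 7  # extra powers of 2 from denormal encoding

normal_binades = sum(n_exp for n_exp, _ in NORMAL_TIERS)  # 7 + 8 + 16 = 31
total_binades = normal_binades + DENORMAL_EXTRA_BINADES   # 31 + 7 = 38

print(f"normal-range binades: {normal_binades}")  # 31
print(f"total binades:        {total_binades}")   # 38 (FP16 spans 40)
```

Note how the taper trades mantissa bits for exponent coverage: the tiers with fewer mantissa bits span more binades, which is what lets a single 8-bit format serve both forward activations and the wider-ranged gradients of the backward pass.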
