Exploring the Potential of Low-bit Training of Convolutional Neural Networks

4 June 2020 · arXiv:2006.02804
Kai Zhong
Xuefei Ning
Guohao Dai
Zhenhua Zhu
Tianchen Zhao
Shulin Zeng
Yu Wang
Huazhong Yang
Abstract

In this work, we propose a low-bit training framework for convolutional neural networks that is built around a novel multi-level scaling (MLS) tensor format. The framework reduces the energy consumption of convolution operations by quantizing all convolution operands to a low-bit-width format. Specifically, we propose the MLS tensor format, in which the element-wise bit-width can be greatly reduced. We then describe the dynamic quantization and the low-bit tensor convolution arithmetic that leverage the MLS tensor format efficiently. Experiments show that our framework achieves a better trade-off between accuracy and bit-width than previous low-bit training frameworks. For training a variety of models on CIFAR-10, a 1-bit mantissa and a 2-bit exponent are adequate to keep the accuracy loss within 1%. On larger datasets such as ImageNet, a 4-bit mantissa and a 2-bit exponent are adequate to keep the accuracy loss within 1%. Through energy consumption simulation of the computing units, we estimate that training a variety of models with our framework achieves 8.3∼10.2× and 1.9∼2.3× higher energy efficiency than training with full-precision and 8-bit floating-point arithmetic, respectively.
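The abstract does not spell out how the MLS format is constructed, but the general idea of pairing a shared group-level scale with per-element low-bit mantissa and exponent values can be sketched as follows. This is a minimal illustration under assumed conventions; the group size, the clipping range of the exponent, and the quantize_mls helper are hypothetical and not taken from the paper.

import numpy as np

def quantize_mls(x, mant_bits=4, exp_bits=2, group_size=64):
    """Illustrative group-wise low-bit quantization (not the paper's exact format).

    Each group of `group_size` elements shares a floating-point scale
    (the coarse scaling level), while each element keeps only a low-bit
    mantissa and a small power-of-two exponent (the fine level).
    """
    flat = x.flatten()
    pad = (-flat.size) % group_size
    flat = np.pad(flat, (0, pad))
    groups = flat.reshape(-1, group_size)

    # Group-level scale: align each group to its maximum magnitude.
    scales = np.abs(groups).max(axis=1, keepdims=True) + 1e-12

    # Element-level exponent: a small power-of-two shift per element.
    normalized = groups / scales                      # values in [-1, 1]
    exps = np.clip(np.floor(np.log2(np.abs(normalized) + 1e-12)),
                   -(2 ** exp_bits - 1), 0)           # e.g. 2 bits -> {0, ..., -3}

    # Element-level mantissa: round the remaining factor to mant_bits of precision.
    mant_levels = 2 ** mant_bits
    mants = np.round(normalized / (2.0 ** exps) * mant_levels) / mant_levels

    # Dequantize to measure the error introduced by the low-bit representation.
    dequant = mants * (2.0 ** exps) * scales
    return dequant.flatten()[:x.size].reshape(x.shape)

# Example: quantize a random weight tensor and inspect the reconstruction error.
w = np.random.randn(128, 64).astype(np.float32)
w_q = quantize_mls(w, mant_bits=4, exp_bits=2)
print("mean abs error:", np.abs(w - w_q).mean())

In a real low-bit training pipeline the quantized mantissas and exponents, rather than the dequantized values, would feed the convolution arithmetic; the sketch above only illustrates how little per-element precision the format needs once the group-level scale absorbs the dynamic range.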
