With Shared Microexponents, A Little Shifting Goes a Long Way
International Symposium on Computer Architecture (ISCA), 2023
Main: 11 pages, 9 figures, 8 tables; bibliography: 2 pages
Abstract
This paper introduces Block Data Representations (BDR), a framework for exploring and evaluating a wide spectrum of narrow-precision formats for deep learning. It enables comparison of popular quantization standards, and through BDR, new formats based on shared microexponents (MX) are identified that outperform other state-of-the-art quantization approaches, including narrow-precision floating-point and block floating-point. MX utilizes multiple levels of quantization scaling, with ultra-fine scaling factors based on shared microexponents in the hardware. The effectiveness of MX is demonstrated on real-world models, including large-scale generative pretraining and inferencing, and production-scale recommendation systems.
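To make the idea of multi-level scaling with shared microexponents concrete, here is a minimal sketch of a two-level block quantizer: a coarse shared exponent per block plus a 1-bit microexponent shared by each sub-block. The function name, block/sub-block sizes, and mantissa width are illustrative assumptions, not the paper's exact MX format definitions.

```python
import numpy as np

def mx_quantize_sketch(x, block_size=16, subblock_size=2, mantissa_bits=4):
    """Toy two-level MX-style quantizer (illustrative only):
    level 1 is a shared exponent per block, level 2 is a 1-bit
    shared microexponent per sub-block that shifts the scale."""
    x = np.asarray(x, dtype=np.float64)
    out = np.empty_like(x)
    for start in range(0, x.size, block_size):
        block = x[start:start + block_size]
        # Level 1: coarse shared exponent for the whole block.
        max_abs = np.max(np.abs(block))
        block_exp = np.floor(np.log2(max_abs)) if max_abs > 0 else 0.0
        for s in range(0, block.size, subblock_size):
            sub = block[s:s + subblock_size]
            # Level 2: 1-bit microexponent shared by the sub-block;
            # shift the scale down by one binade when the sub-block is small.
            sub_max = np.max(np.abs(sub))
            micro = 1 if (sub_max > 0 and sub_max < 0.5 * 2.0 ** block_exp) else 0
            scale = 2.0 ** (block_exp - micro)
            # Quantize mantissas to a fixed number of levels relative to the scale.
            levels = 2 ** (mantissa_bits - 1) - 1
            q = np.clip(np.round(sub / scale * levels), -levels, levels)
            out[start + s:start + s + sub.size] = q / levels * scale
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v = rng.standard_normal(64)
    vq = mx_quantize_sketch(v)
    print("max abs error:", np.max(np.abs(v - vq)))
```

The sketch only illustrates why a tiny per-sub-block shift helps: sub-blocks whose magnitudes fall below the block's shared scale get one extra bit of effective precision, at the cost of a single shared microexponent bit rather than a full per-element exponent.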
