
StableQAT: Stable Quantization-Aware Training at Ultra-Low Bitwidths

Tianyi Chen
Sihan Chen
Xiaoyi Qu
Dan Zhao
Ruomei Yan
Jongwoo Ko
Luming Liang
Pashmina Cameron
Main: 8 pages, 8 figures, 3 tables; bibliography: 2 pages; appendix: 10 pages
Abstract

Quantization-aware training (QAT) is essential for deploying large models under strict memory and latency constraints, yet achieving stable and robust optimization at ultra-low bitwidths remains challenging. Common approaches based on the straight-through estimator (STE) or soft quantizers often suffer from gradient mismatch, instability, or high computational overhead. We propose StableQAT, a unified and efficient QAT framework that stabilizes training in ultra-low-bit settings via a novel, lightweight, and theoretically grounded surrogate for backpropagation derived from a discrete Fourier analysis of the rounding operator. StableQAT strictly generalizes STE, which arises as a special case of our more expressive surrogate family, yielding smooth, bounded, and inexpensive gradients that improve QAT performance and stability across a range of hyperparameter choices. In experiments, StableQAT delivers stable and efficient QAT in the 2-4 bit regime, demonstrating improved training stability, robustness, and superior performance over standard QAT techniques with negligible training overhead. Our code is available at this https URL.
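To make the idea concrete, below is a minimal PyTorch sketch of a truncated Fourier-series surrogate gradient for the rounding operator. The class name FourierSurrogateRound, the helper fake_quantize, the harmonic count num_harmonics, and the particular truncation are illustrative assumptions rather than the paper's released implementation; the only property taken from the abstract is that the surrogate family contains STE as a special case (here, zero harmonics gives an identity gradient).

```python
import math
import torch


class FourierSurrogateRound(torch.autograd.Function):
    """Round-to-nearest with a truncated Fourier-series surrogate gradient.

    Illustrative sketch (not the paper's code): the forward pass applies hard
    rounding; the backward pass differentiates the Fourier expansion
        round(x) = x + (1/pi) * sum_{k>=1} (-1)^k sin(2*pi*k*x) / k,
    truncated to `num_harmonics` terms. num_harmonics = 0 yields an identity
    gradient, i.e. the straight-through estimator (STE).
    """

    @staticmethod
    def forward(ctx, x, num_harmonics=3):
        ctx.save_for_backward(x)
        ctx.num_harmonics = num_harmonics
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Derivative of the truncated series:
        #   d/dx round(x) ~= 1 + 2 * sum_{k=1..K} (-1)^k cos(2*pi*k*x)
        surrogate = torch.ones_like(x)
        for k in range(1, ctx.num_harmonics + 1):
            surrogate = surrogate + 2.0 * ((-1.0) ** k) * torch.cos(2 * math.pi * k * x)
        return grad_output * surrogate, None


def fake_quantize(w, bits=2, num_harmonics=3):
    """Symmetric uniform fake-quantization at `bits` bits using the surrogate
    rounding above (hypothetical helper for illustration only)."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    q = FourierSurrogateRound.apply(w / scale, num_harmonics)
    return q.clamp(-qmax - 1, qmax) * scale
```

In a QAT training loop, each weight (and optionally activation) quantizer would be replaced by a function like fake_quantize; the point of the sketch is only that richer truncations of the rounding operator's Fourier series give smooth, bounded surrogate gradients, with STE recovered when all harmonics are dropped.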
