
HBLLM: Wavelet-Enhanced High-Fidelity 1-Bit Quantization for LLMs

Ningning Chen
Weicai Ye
Ying Jiang
Main: 10 pages · 2 figures · 4 tables · Bibliography: 1 page · Appendix: 7 pages
Abstract

We introduce HBLLM, a wavelet-enhanced high-fidelity 1-bit post-training quantization method for Large Language Models (LLMs). By leveraging Haar wavelet transforms to enhance expressive capacity through frequency decomposition, HBLLM significantly improves quantization fidelity while maintaining minimal overhead. This approach features two innovative structure-aware grouping strategies: (1) frequency-aware multi-parameter intra-row grouping and (2) \ell_2-norm-based saliency-driven column selection. For non-salient weights, a shared mean is employed across quantization groups within each frequency band to optimize storage efficiency. Experiments conducted on the OPT and LLaMA models demonstrate that HBLLM achieves state-of-the-art performance in 1-bit quantization, attaining a perplexity of 6.71 on LLaMA2-13B with an average weight storage of only 1.08 bits. Code available at: this https URL.
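To make the mechanism described in the abstract more concrete, the sketch below shows one plausible reading of it: each weight row is split into low- and high-frequency Haar bands, the columns with the largest \ell_2 norm are kept at higher precision, and the remaining coefficients in each band are binarized around a shared per-band mean and scale. This is a minimal illustration, not the authors' implementation; all function and parameter names (`haar_1d`, `quantize_weight`, `num_salient`) are assumptions for the example.

```python
import numpy as np

def haar_1d(row):
    """Single-level Haar transform: split a row into low/high frequency bands."""
    even, odd = row[0::2], row[1::2]
    low = (even + odd) / np.sqrt(2.0)   # approximation (low-frequency) coefficients
    high = (even - odd) / np.sqrt(2.0)  # detail (high-frequency) coefficients
    return low, high

def inverse_haar_1d(low, high):
    """Reconstruct a row from its low/high frequency coefficients."""
    even = (low + high) / np.sqrt(2.0)
    odd = (low - high) / np.sqrt(2.0)
    row = np.empty(low.size + high.size)
    row[0::2], row[1::2] = even, odd
    return row

def quantize_weight(W, num_salient=8):
    """Illustrative wavelet-domain 1-bit-style quantization (assumed behavior).

    Salient columns (largest l2 norm) are left at full precision; within each
    Haar frequency band of every row, the remaining coefficients are binarized
    to mean +/- a shared scale, so each band stores only sign bits plus two scalars.
    """
    W = np.asarray(W, dtype=np.float64)
    # l2-norm-based saliency-driven column selection
    col_norms = np.linalg.norm(W, axis=0)
    salient = np.argsort(col_norms)[-num_salient:]

    W_hat = np.empty_like(W)
    for i, row in enumerate(W):
        low, high = haar_1d(row)
        q_bands = []
        for band in (low, high):
            mu = band.mean()                  # shared mean within the band
            scale = np.abs(band - mu).mean()  # shared scale within the band
            q_bands.append(mu + scale * np.sign(band - mu))
        W_hat[i] = inverse_haar_1d(*q_bands)

    W_hat[:, salient] = W[:, salient]  # restore salient columns at full precision
    return W_hat

if __name__ == "__main__":
    W = np.random.randn(64, 64)
    W_q = quantize_weight(W)
    print("mean abs reconstruction error:", np.abs(W - W_q).mean())
```

The point of the sketch is only to show where the storage savings would come from under this reading: non-salient weights in each band cost one sign bit each plus a shared mean and scale, while the handful of salient columns account for the small overhead above 1 bit per weight.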
