
Boost Post-Training Quantization via Null Space Optimization for Large Language Models

Main: 9 pages · 4 figures · 7 tables · Bibliography: 4 pages · Appendix: 4 pages
Abstract

Existing post-training quantization (PTQ) methods for large language models (LLMs) have achieved remarkable success. However, their increasingly marginal performance gains suggest that existing quantization strategies are insufficient to support the development of more compressed models. To inspire new directions for future research, this paper introduces the concept of null space into LLM quantization. We argue that quantization error can be effectively alleviated by constraining the post-quantization weight perturbation to lie within the null space of the input activations. To validate this idea, we propose Q2N, a plug-and-play null space projection module for existing milestone PTQ baselines. Specifically, we first design an efficient and accurate null space projection approximation method tailored to the characteristics of LLMs. We then theoretically derive a closed-form solution for an equivalent vector of the obtained projection matrix, which satisfies practical inference conditions while avoiding additional memory overhead. Extensive experiments on various state-of-the-art LLMs (LLaMA3, DeepSeek, Qwen3) and baselines demonstrate the effectiveness of both Q2N and the null space optimization perspective for LLM quantization. We view this paper as a first step toward further alleviating quantization error based on null space insights, and we hope it inspires future researchers to design more advanced quantization methods. Code is available at this https URL.
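
For intuition, the following is a minimal PyTorch sketch of the null-space idea described in the abstract, not the authors' Q2N implementation: it builds an approximate projector onto the null space of synthetic, rank-deficient calibration activations and checks that a weight perturbation projected through it no longer changes the layer output. All shapes, the tolerance rule, and the toy data are illustrative assumptions.

# Minimal sketch of the null-space idea (not the Q2N algorithm itself).
# Assumptions: toy sizes, synthetic rank-deficient activations, an SVD-based
# tolerance rule for deciding which singular directions count as "null".
import torch

torch.manual_seed(0)

n_tokens, d_in, d_out = 64, 256, 128
# Calibration activations with rank 64 < d_in, so a non-trivial null space exists.
X = torch.randn(n_tokens, 64) @ torch.randn(64, d_in)

# Null-space projector P = V0 @ V0^T, where V0 collects the right singular
# vectors of X whose singular values fall below a tolerance.
U, S, Vh = torch.linalg.svd(X, full_matrices=True)
tol = S.max() * max(X.shape) * torch.finfo(S.dtype).eps
null_mask = torch.ones(d_in, dtype=torch.bool)
null_mask[: S.numel()] = S < tol
V0 = Vh.T[:, null_mask]            # basis of null(X), shape [d_in, k]
P_null = V0 @ V0.T                 # projector onto null(X)

# Any weight perturbation dW (a stand-in for the quantization error W_q - W)
# projected onto the null space leaves the layer output X @ W.T unchanged.
W = torch.randn(d_out, d_in)
dW = 0.01 * torch.randn(d_out, d_in)
dW_null = dW @ P_null              # constrain perturbation to null(X)

err_raw = (X @ dW.T).norm().item()
err_proj = (X @ dW_null.T).norm().item()
print(f"output error before projection: {err_raw:.4e}")
print(f"output error after  projection: {err_proj:.4e}")   # ~0

The real method would fold such a correction into the quantization procedure itself and use the paper's efficient projection approximation rather than a full SVD; this sketch only illustrates why a perturbation confined to the activations' null space produces no output error.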

@article{zhao2025_2506.11044,
  title={Boost Post-Training Quantization via Null Space Optimization for Large Language Models},
  author={Jiaqi Zhao and Miao Zhang and Weili Guan and Liqiang Nie},
  journal={arXiv preprint arXiv:2506.11044},
  year={2025}
}