NestQuant: Nested Lattice Quantization for Matrix Products and LLMs

Post-training quantization (PTQ) has emerged as a critical technique for the efficient deployment of large language models (LLMs). This work proposes NestQuant, a novel PTQ scheme for weights and activations that is based on self-similar nested lattices. Recent work has mathematically shown such quantizers to be information-theoretically optimal for low-precision matrix multiplication. We implement a practical, low-complexity version of NestQuant based on the Gosset lattice, making it a drop-in quantizer for any matrix multiplication step (e.g., in self-attention, MLPs, etc.). For example, NestQuant quantizes the weights, KV-cache, and activations of Llama-3-8B to 4 bits, achieving a perplexity of 6.6 on WikiText-2. This represents a more than 55% reduction in the perplexity gap relative to the unquantized model (perplexity 6.14), compared with Meta's state-of-the-art SpinQuant (perplexity 7.3). Comparisons on various LLM evaluation benchmarks also show a reduction in the performance degradation induced by quantization.
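
The construction described above can be illustrated with a minimal sketch, not the authors' actual low-complexity implementation: quantize 8-dimensional blocks to the Gosset lattice E8 and reduce modulo a scaled copy q*E8 of the same lattice (the self-similar nested pair). The nearest-point routines follow the standard decomposition E8 = D8 union (D8 + 1/2); the scale beta, nesting ratio q, and function names are illustrative assumptions, and the decoder is exact only when no coarse-lattice overload occurs.

import numpy as np

def nearest_D8(x):
    """Nearest point of the checkerboard lattice D8 (integer vectors with even sum)."""
    f = np.round(x)
    if int(f.sum()) % 2 != 0:
        # Parity is wrong: re-round the coordinate with the largest rounding
        # error in the opposite direction.
        i = np.argmax(np.abs(x - f))
        f[i] += 1.0 if x[i] > f[i] else -1.0
    return f

def nearest_E8(x):
    """Nearest point of the Gosset lattice E8 = D8 union (D8 + 1/2)."""
    c0 = nearest_D8(x)
    c1 = nearest_D8(x - 0.5) + 0.5
    return c0 if np.sum((x - c0) ** 2) <= np.sum((x - c1) ** 2) else c1

def nestquant_encode(x, beta=0.3, q=16):
    """Quantize an 8-dim block with the self-similar nested pair (E8, q*E8).

    Returns the coset representative of Q_E8(x / beta) modulo q*E8, which can
    be stored with roughly 8 * log2(q) bits per block.
    """
    y = nearest_E8(x / beta)
    coset = y - q * nearest_E8(y / q)  # reduce modulo the coarse lattice q*E8
    return coset

def nestquant_decode(coset, beta=0.3):
    """Map the stored coset representative back to an approximation of x.

    Exact up to the fine-lattice quantization error as long as x / beta fell
    inside the Voronoi region of q*E8 (no overload).
    """
    return beta * coset

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=8)                      # one 8-dimensional block
    x_hat = nestquant_decode(nestquant_encode(x))
    print("reconstruction error:", np.linalg.norm(x - x_hat))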
@article{savkin2025_2502.09720,
  title   = {NestQuant: Nested Lattice Quantization for Matrix Products and LLMs},
  author  = {Semyon Savkin and Eitan Porat and Or Ordentlich and Yury Polyanskiy},
  journal = {arXiv preprint arXiv:2502.09720},
  year    = {2025}
}