Assigning Distinct Roles to Quantized and Low-Rank Matrices Toward Optimal Weight Decomposition

Decomposing weight matrices into quantization and low-rank components (W ≈ Q + LR) is a widely used technique for compressing large language models (LLMs). Existing joint optimization methods iteratively alternate between quantization and low-rank approximation. However, these methods tend to prioritize one component at the expense of the other, resulting in suboptimal decompositions that fail to leverage each component's unique strengths. In this work, we introduce Outlier-Driven Low-Rank Initialization (ODLRI), which assigns the low-rank component the specific role of capturing activation-sensitive weights. This structured decomposition mitigates the negative impact of outliers on quantization, enabling a more effective balance between quantization and low-rank approximation. Experiments on Llama2 (7B, 13B, 70B), Llama3-8B, and Mistral-7B demonstrate that incorporating ODLRI into the joint optimization framework consistently reduces activation-aware error, minimizes quantization scale, and improves perplexity and zero-shot accuracy in low-bit settings.
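Since the abstract only names the idea, below is a minimal sketch of what an outlier-driven initialization could look like, assuming a per-input-channel activation statistic `act_scales` gathered from calibration data. The function names (`odlri_init`, `fake_quantize`), the activation-weighted SVD, and the group-wise round-to-nearest quantizer are illustrative assumptions, not the paper's actual procedure.

```python
import torch

def fake_quantize(W, n_bits=4, group_size=128):
    """Symmetric round-to-nearest quantization per group of input channels
    (a generic stand-in; the paper's quantizer may differ). Assumes the
    input dimension is divisible by group_size."""
    out_dim, in_dim = W.shape
    Wg = W.reshape(out_dim, in_dim // group_size, group_size)
    qmax = 2 ** (n_bits - 1) - 1
    scale = Wg.abs().amax(dim=-1, keepdim=True) / qmax
    scale = scale.clamp(min=1e-8)  # avoid division by zero on all-zero groups
    Wq = (Wg / scale).round().clamp(-qmax - 1, qmax)
    return (Wq * scale).reshape(out_dim, in_dim)

def odlri_init(W, act_scales, rank, n_bits=4):
    """Outlier-driven low-rank initialization, sketched: take the SVD of the
    activation-weighted weights so the top singular directions (dominated by
    activation-sensitive "outlier" channels) land in the low-rank factors,
    then quantize the residual, whose dynamic range is now smaller."""
    Ws = W * act_scales                    # emphasize sensitive input channels
    U, S, Vh = torch.linalg.svd(Ws, full_matrices=False)
    L = U[:, :rank] * S[:rank]             # (out, rank)
    R = Vh[:rank] / act_scales             # (rank, in); undo the scaling
    Q = fake_quantize(W - L @ R, n_bits)   # quantize the outlier-free residual
    return Q, L, R

# Example: decompose one 4096x4096 layer at rank 64 with synthetic scales.
W = torch.randn(4096, 4096)
act_scales = torch.rand(4096) * 10 + 0.1   # stand-in for calibration statistics
Q, L, R = odlri_init(W, act_scales, rank=64)
rel_error = (W - (Q + L @ R)).norm() / W.norm()
```

The design point this sketch illustrates is the role assignment: the low-rank term is initialized to absorb activation-sensitive directions before quantization runs, so the quantizer never has to stretch its scale to cover outliers.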
@article{cho2025_2506.02077,
  title={Assigning Distinct Roles to Quantized and Low-Rank Matrices Toward Optimal Weight Decomposition},
  author={Yoonjun Cho and Soeun Kim and Dongjae Jeon and Kyelim Lee and Beomsoo Lee and Albert No},
  journal={arXiv preprint arXiv:2506.02077},
  year={2025}
}