
Identifying Sensitive Weights via Post-quantization Integral

28 February 2025
Yuezhou Hu
Weiyu Huang
Zichen Liang
Chang Chen
Jintao Zhang
Jun Zhu
Jianfei Chen
Abstract

Serving Large Language Models (LLMs) is costly. Post-training weight quantization can address this problem by compressing model size to fit limited memory and by reducing bandwidth to accelerate inference. Since not all weight dimensions are equally important, such methods typically rely on a sensitivity metric, which indicates the element-wise influence of weights on the loss function and is used to preprocess the original weights for better quantization. In this work, we conduct an empirical study on the accuracy of the sensitivity metric and find that existing gradient- and Hessian-based metrics are very inaccurate: they underestimate quantization's impact on the loss function by orders of magnitude, mainly due to the small convergence radius of the local second-order approximation, i.e., the gradient and Hessian terms in Taylor's formula. To tackle this problem, we propose Post-quantization Integral (PQI), an accurate metric that estimates posterior sensitivity in a fine-grained manner. To leverage this accurate metric, we further propose ReQuant, a simple yet powerful framework that mainly consists of two Dense-and-Sparse detach components: self-adaptive outlier selection and step-wise significant weights detach. Results show that ReQuant boosts state-of-the-art post-training quantization methods, with a pronounced 2.66 perplexity improvement on Llama 3.2 1B with QTIP.
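
As a rough, self-contained illustration of the comparison described above (and not the paper's PQI metric or experimental setup), the sketch below uses NumPy with a toy logistic loss and a crude uniform quantizer, all of which are assumptions introduced here. It compares three estimates of the loss change caused by quantizing the weights: the true difference L(w_q) - L(w), the local second-order Taylor estimate built from the gradient and Hessian, and a numerical path integral of the gradient along the straight line from w to w_q, which recovers the exact change by the fundamental theorem of calculus and conveys the general flavor of a post-quantization integral.

# Minimal sketch (assumptions throughout): toy loss, finite-difference
# derivatives, and a crude uniform quantizer stand in for an LLM setup.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 16))           # toy inputs
y = rng.integers(0, 2, size=128)         # toy binary labels
w = rng.normal(scale=0.5, size=16)       # "pretrained" weights

def loss(w):
    # logistic loss of a linear model, standing in for the LLM loss
    margin = (X @ w) * (2 * y - 1)
    return np.mean(np.log1p(np.exp(-margin)))

def grad(w, eps=1e-5):
    # central finite-difference gradient
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

def hessian(w, eps=1e-4):
    # finite-difference Hessian from gradient differences
    n = w.size
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        H[:, i] = (grad(w + e) - grad(w - e)) / (2 * eps)
    return H

# crude 4-bit uniform quantizer, only to create a quantization perturbation
scale = np.abs(w).max() / 7
w_q = np.clip(np.round(w / scale), -8, 7) * scale
dw = w_q - w

# true loss change vs. local second-order Taylor estimate
true_delta = loss(w_q) - loss(w)
taylor_delta = grad(w) @ dw + 0.5 * dw @ hessian(w) @ dw

# trapezoidal path integral of the gradient along w -> w_q; by the
# fundamental theorem of calculus this recovers the exact loss change
ts = np.linspace(0.0, 1.0, 33)
vals = np.array([grad(w + t * dw) @ dw for t in ts])
path_delta = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts))

print(f"true delta-loss        : {true_delta:.6f}")
print(f"2nd-order Taylor est.  : {taylor_delta:.6f}")
print(f"gradient path integral : {path_delta:.6f}")

How large the gap between the Taylor estimate and the true change is will depend on the loss and the quantization step; the paper's empirical finding about orders-of-magnitude underestimation refers to LLM losses, not this toy.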

@article{hu2025_2503.01901,
  title={Identifying Sensitive Weights via Post-quantization Integral},
  author={Yuezhou Hu and Weiyu Huang and Zichen Liang and Chang Chen and Jintao Zhang and Jun Zhu and Jianfei Chen},
  journal={arXiv preprint arXiv:2503.01901},
  year={2025}
}