ResearchTrend.AI
Qronos: Correcting the Past by Shaping the Future... in Post-Training Quantization

16 May 2025
Shihao Zhang
Haoyu Zhang
Ian Colbert
Rayan Saab
Abstract

We introduce Qronos -- a new state-of-the-art post-training quantization algorithm that sequentially rounds and updates neural network weights. Qronos not only explicitly corrects errors due to both weight and activation quantization, but also errors resulting from quantizing previous layers. Our iterative algorithm is based on an interpretable and disciplined optimization framework that subsumes and surpasses existing data-driven approaches. At each step, Qronos alternates between error correction and diffusion via optimal update rules. Importantly, we prove that Qronos admits an efficient implementation that uses the Cholesky decomposition for solving least-squares problems. We also demonstrate that Qronos is compatible with existing transformation techniques such as Hadamard-based incoherence processing and weight-activation scaling equalization, among others. We evaluate Qronos using recent autoregressive language generation models in the Llama3 family; Qronos consistently outperforms previous state-of-the-art adaptive rounding methods when quantizing the weights, activations, and/or KV caches.
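The abstract notes that Qronos admits an efficient implementation using the Cholesky decomposition to solve least-squares problems. The sketch below illustrates only that generic building block, not the authors' algorithm: after some weight coordinates are rounded (a hypothetical split into `quantized` and `free` indices of our own choosing), the remaining coordinates are re-fit by least squares over calibration activations, with the normal equations solved via Cholesky factors.

```python
import numpy as np

# Hedged sketch: illustrates Cholesky-based least-squares error
# correction in the spirit of the abstract; NOT the Qronos algorithm.
# All names (X, w_q, free, quantized) are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 16))   # calibration activations (samples x features)
w = rng.standard_normal(16)          # one column of full-precision weights

# Hypothetical split: first 8 coordinates already rounded to a grid,
# remaining 8 coordinates still free to be adjusted.
quantized = np.arange(8)
free = np.arange(8, 16)
w_q = np.round(w[quantized] * 4) / 4     # rounded to a 0.25 grid (illustrative)

# Correct the free weights to absorb the rounding error:
# minimize || X w - X[:, quantized] w_q - X[:, free] w_u ||^2 over w_u.
target = X @ w - X[:, quantized] @ w_q
A = X[:, free]

# Solve the normal equations (A^T A) w_u = A^T target via Cholesky.
H = A.T @ A + 1e-8 * np.eye(len(free))   # tiny damping for numerical stability
L = np.linalg.cholesky(H)                # H = L L^T
y = np.linalg.solve(L, A.T @ target)     # forward substitution
w_u = np.linalg.solve(L.T, y)            # back substitution

# The corrected free weights reduce the layer-output error relative to
# keeping the original free weights alongside the rounded ones.
err_corrected = np.linalg.norm(X[:, quantized] @ w_q + A @ w_u - X @ w)
err_naive = np.linalg.norm(X[:, quantized] @ w_q + A @ w[free] - X @ w)
```

Because `w_u` is the (damped) least-squares minimizer, `err_corrected` is no larger than `err_naive`; the actual Qronos update rules, per the abstract, additionally interleave error correction with diffusion steps across rounding iterations.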

@article{zhang2025_2505.11695,
  title={Qronos: Correcting the Past by Shaping the Future... in Post-Training Quantization},
  author={Shihao Zhang and Haoyu Zhang and Ian Colbert and Rayan Saab},
  journal={arXiv preprint arXiv:2505.11695},
  year={2025}
}