
QwT-v2: Practical, Effective and Efficient Post-Training Quantization

Main: 9 pages
2 figures
12 tables
Bibliography: 5 pages
Appendix: 1 page
Abstract

Network quantization is arguably one of the most practical network compression approaches for reducing the enormous resource consumption of modern deep neural networks. However, quantization methods usually require diverse and subtle design choices tailored to specific architectures and tasks. In contrast, QwT is a simple and general method that introduces lightweight additional structures to improve quantization, but it incurs extra parameters and latency and, more importantly, is not compatible with many hardware platforms. In this paper, we propose QwT-v2, which not only retains all the advantages of QwT but also resolves its major defects. By adopting a very lightweight channel-wise affine compensation (CWAC) module, QwT-v2 introduces significantly fewer extra parameters and computations than QwT, while matching or even outperforming it in accuracy. The compensation module of QwT-v2 can be integrated into quantization inference engines with little effort, which not only effectively removes the extra costs but also makes it compatible with most existing hardware platforms.
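As a rough illustration of the idea only (not the authors' implementation, whose details are in the paper), a channel-wise affine compensation module can be sketched as a learnable per-channel scale and shift applied to the output of a quantized layer; the class name and initialization below are assumptions made for the sketch.

import torch
import torch.nn as nn

class ChannelWiseAffineCompensation(nn.Module):
    """Hypothetical sketch of a channel-wise affine compensation (CWAC) module.

    It applies a per-channel scale and shift to the output of a quantized
    layer: y_compensated = scale * y_quant + shift. Because the transform is
    affine and channel-wise, it could in principle be folded into the layer's
    existing per-channel quantization parameters at inference time.
    """

    def __init__(self, num_channels: int):
        super().__init__()
        # Initialize to the identity transform (scale = 1, shift = 0).
        self.scale = nn.Parameter(torch.ones(num_channels))
        self.shift = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (N, C, ...); broadcast scale/shift over all non-channel dims.
        shape = [1, -1] + [1] * (x.dim() - 2)
        return x * self.scale.view(shape) + self.shift.view(shape)

# Example usage: attach the compensation to a (simulated) quantized conv layer.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)   # stand-in for a quantized layer
cwac = ChannelWiseAffineCompensation(num_channels=32)
x = torch.randn(4, 16, 8, 8)
y = cwac(conv(x))  # compensated output, same shape as conv(x)
print(y.shape)     # torch.Size([4, 32, 8, 8])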

@article{tang2025_2505.20932,
  title={QwT-v2: Practical, Effective and Efficient Post-Training Quantization},
  author={Ningyuan Tang and Minghao Fu and Hao Yu and Jianxin Wu},
  journal={arXiv preprint arXiv:2505.20932},
  year={2025}
}