
Turning LLM Activations Quantization-Friendly

Main: 5 pages, 5 figures; Bibliography: 1 page
Abstract

Quantization effectively reduces the serving costs of Large Language Models (LLMs) by speeding up data movement through compressed parameters and enabling faster operations via integer arithmetic. However, exploiting integer arithmetic requires quantizing both weights and activations, which is challenging because of the large activation outliers in LLMs that increase quantization error. In this work, we investigate these outliers with an emphasis on their effect on layer-wise quantization error, then examine how smoothing and rotation transform the observed values. Our primary contributions are a new metric to measure and visualize quantization difficulty based on channel magnitudes, and a hybrid approach that applies channel-wise scaling before rotation, supported by a mathematical formulation of its benefits.
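To make the two activation transforms mentioned in the abstract concrete, below is a minimal sketch (not the authors' implementation) of channel-wise scaling ("smoothing") followed by an orthogonal rotation, applied to a toy activation matrix with injected outlier channels. The migration exponent `alpha`, the per-tensor INT8 quantizer, and the use of a random orthogonal matrix are illustrative assumptions; they only demonstrate that both transforms preserve the matmul while redistributing outlier magnitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activations X (tokens x channels) with a few large-magnitude channels,
# and a weight matrix W (channels x out_features).
tokens, channels, out_features = 128, 64, 64
X = rng.normal(size=(tokens, channels))
X[:, :4] *= 50.0                       # inject outlier channels
W = rng.normal(scale=0.02, size=(channels, out_features))

def quantize_int8(A):
    """Symmetric per-tensor INT8 quantization (round-to-nearest)."""
    scale = np.abs(A).max() / 127.0
    return np.round(A / scale).clip(-127, 127) * scale

def quant_error(A):
    """Relative Frobenius-norm quantization error."""
    return np.linalg.norm(A - quantize_int8(A)) / np.linalg.norm(A)

# 1) Channel-wise scaling: divide each activation channel by s_j and fold
#    the inverse into the weights, so X @ W == (X / s) @ (diag(s) @ W).
alpha = 0.5                            # assumed migration strength
s = np.abs(X).max(axis=0) ** alpha / (np.abs(W).max(axis=1) ** (1 - alpha))
X_smooth, W_smooth = X / s, W * s[:, None]

# 2) Rotation: multiply by an orthogonal Q and fold Q.T into the weights,
#    so X @ W == (X @ Q) @ (Q.T @ W); the rotation spreads the remaining
#    outlier energy across channels.
Q, _ = np.linalg.qr(rng.normal(size=(channels, channels)))
X_rot, W_rot = X_smooth @ Q, Q.T @ W_smooth

print(f"activation quant error, raw:          {quant_error(X):.4f}")
print(f"activation quant error, smoothed:     {quant_error(X_smooth):.4f}")
print(f"activation quant error, smoothed+rot: {quant_error(X_rot):.4f}")

# Both transforms leave the layer output unchanged up to float error.
assert np.allclose(X @ W, X_rot @ W_rot)
```

In this toy setting the smoothed and rotated activations show a noticeably lower per-tensor quantization error than the raw activations, which is the effect the paper's hybrid scaling-then-rotation approach targets.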

@article{czakó2025_2506.01967,
  title={Turning LLM Activations Quantization-Friendly},
  author={Patrik Czakó and Gábor Kertész and Sándor Szénási},
  journal={arXiv preprint arXiv:2506.01967},
  year={2025}
}