
STaMP: Sequence Transformation and Mixed Precision for Low-Precision Activation Quantization

Marco Federici
Riccardo Del Chiaro
Boris van Breugel
Paul Whatmough
Markus Nagel
Main: 10 pages · 10 figures · 5 tables · Bibliography: 3 pages · Appendix: 8 pages
Abstract

Quantization is the key method for reducing the inference latency, power, and memory footprint of generative AI models. However, accuracy often degrades sharply when activations are quantized below eight bits. Recent work suggests that invertible linear transformations (e.g. rotations) can aid quantization by reparameterizing feature channels and weights. In this paper, we propose Sequence Transformation and Mixed Precision (STaMP) quantization, a novel strategy that applies linear transformations along the sequence dimension to exploit the strong local correlation in language and visual data. By keeping a small number of tokens in each intermediate activation at higher precision, we can maintain model accuracy at lower (average) activation bit-widths. We evaluate STaMP on recent LVM and LLM architectures, demonstrating that it significantly improves low-bit-width activation quantization and complements established activation and weight quantization methods, including recent feature transformations.
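As a rough illustration of the idea described in the abstract, the sketch below (not the authors' implementation) applies a hypothetical orthogonal transform along the sequence dimension, quantizes most transformed tokens at a low bit-width while keeping a few at higher precision, and then inverts the transform. All function names, the choice of transform, and the quantizer are illustrative assumptions, not details taken from the paper.

```python
# Conceptual sketch of sequence transformation + mixed precision (assumptions, not the paper's code).
import torch

def symmetric_quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
    """Fake-quantize x with symmetric per-tensor scaling at the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return (x / scale).round().clamp(-qmax, qmax) * scale

def stamp_like_quantize(acts: torch.Tensor, seq_transform: torch.Tensor,
                        low_bits: int = 4, high_bits: int = 8,
                        num_high_precision_tokens: int = 4) -> torch.Tensor:
    """
    acts: (seq_len, hidden) activations for one sample.
    seq_transform: (seq_len, seq_len) invertible (here orthogonal) matrix applied
                   along the sequence dimension.
    Returns a de-quantized approximation of acts.
    """
    # Mix information across tokens along the sequence dimension.
    mixed = seq_transform @ acts

    # Mixed precision: a few tokens stay at higher precision, the rest go to low precision.
    out = torch.empty_like(mixed)
    hi = torch.arange(num_high_precision_tokens)
    lo = torch.arange(num_high_precision_tokens, mixed.shape[0])
    out[hi] = symmetric_quantize(mixed[hi], high_bits)
    out[lo] = symmetric_quantize(mixed[lo], low_bits)

    # Undo the sequence transformation (invertible by construction).
    return torch.linalg.solve(seq_transform, out)

# Example: random activations and a random orthogonal sequence transform.
seq_len, hidden = 64, 128
acts = torch.randn(seq_len, hidden)
q, _ = torch.linalg.qr(torch.randn(seq_len, seq_len))  # orthogonal transform
approx = stamp_like_quantize(acts, q)
print("relative reconstruction error:", ((approx - acts).norm() / acts.norm()).item())
```

The average bit-width in this toy setup is a weighted mix of `high_bits` and `low_bits` over the sequence, which is the sense in which the abstract refers to "lower (average) activation bit-widths"; how the transform and high-precision tokens are actually chosen is specified in the paper, not here.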
