Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms

Abstract

High-dimensional models often have a large memory footprint and must be quantized after training before being deployed on resource-constrained edge devices for inference tasks. In this work, we develop an information-theoretic framework for the problem of quantizing a linear regressor learned from training data $(\mathbf{X}, \mathbf{y})$, under an underlying statistical relationship $\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{v}$. The learned model, which is an estimate of the latent parameter $\boldsymbol{\theta} \in \mathbb{R}^d$, is constrained to be representable using only $Bd$ bits, where $B \in (0, \infty)$ is a pre-specified budget and $d$ is the dimension. We derive an information-theoretic lower bound on the minimax risk in this setting and propose a matching upper bound, achieved by randomized embedding-based algorithms, that is tight up to constant factors. Together, the lower and upper bounds characterize the minimum threshold bit budget required to achieve a risk comparable to the unquantized setting. We also propose computationally efficient randomized Hadamard embeddings that match the lower bound up to a mild logarithmic factor. Our model quantization strategy generalizes beyond the linear setting; we demonstrate this by extending the method and its upper bounds to two-layer ReLU neural networks for nonlinear regression. Numerical simulations show the improved performance of our proposed scheme as well as its closeness to the lower bound.
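
To make the embedding-based quantization idea concrete, below is a minimal sketch of one plausible instantiation: rotate the learned parameter vector with a randomized Hadamard transform (random sign flips followed by an orthonormal Hadamard matrix), apply a uniform scalar quantizer with $2^B$ levels per coordinate in the rotated domain, and invert the rotation. This is an illustrative assumption, not the paper's exact algorithm; the function names, the zero-padding to a power of two, and the way the dynamic range is shared with the decoder are all placeholders.

```python
import numpy as np

def hadamard_matrix(m):
    """Sylvester construction of an m x m Hadamard matrix (m a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < m:
        H = np.block([[H, H], [H, -H]])
    return H

def randomized_hadamard_quantize(theta_hat, B, seed=0):
    """Illustrative sketch: quantize a learned linear model theta_hat to
    roughly B bits per dimension.  The decoder is assumed to share the seed
    and the scalar range (lo, hi); that side information is ignored here."""
    rng = np.random.default_rng(seed)
    d = theta_hat.shape[0]

    # Zero-pad to the next power of two so a Hadamard matrix exists.
    m = 1 << max((d - 1).bit_length(), 0)
    x = np.zeros(m)
    x[:d] = theta_hat

    # Randomized Hadamard embedding: random signs, then orthonormal Hadamard.
    signs = rng.choice([-1.0, 1.0], size=m)
    H = hadamard_matrix(m) / np.sqrt(m)
    z = H @ (signs * x)

    # Uniform scalar quantizer with 2**B levels over the dynamic range of z.
    levels = 2 ** int(B)
    lo, hi = z.min(), z.max()
    step = (hi - lo) / max(levels - 1, 1)
    z_q = z.copy() if step == 0 else lo + np.round((z - lo) / step) * step

    # Invert the embedding to obtain the quantized model.
    x_q = signs * (H.T @ z_q)
    return x_q[:d]

# Example: quantize a 100-dimensional least-squares estimate to 4 bits/dim.
theta_hat = np.random.default_rng(1).standard_normal(100)
theta_q = randomized_hadamard_quantize(theta_hat, B=4)
print("relative error:", np.linalg.norm(theta_q - theta_hat) / np.linalg.norm(theta_hat))
```

The rotation step is what makes the naive per-coordinate quantizer reasonable: the randomized Hadamard transform spreads the energy of $\boldsymbol{\theta}$ roughly evenly across coordinates, so a single shared dynamic range wastes few of the $Bd$ available bits.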
