The success of autoregressive models largely depends on the effectiveness of vector quantization, a technique that discretizes continuous features by mapping them to the nearest code vectors within a learnable codebook. Two critical issues in existing vector quantization methods are training instability and codebook collapse. Training instability arises from the gradient discrepancy introduced by the straight-through estimator, especially in the presence of significant quantization errors, while codebook collapse occurs when only a small subset of code vectors are utilized during training. A closer examination of these issues reveals that they are primarily driven by a mismatch between the distributions of the features and the code vectors, which leads to unrepresentative code vectors and significant information loss during compression. To address this, we employ the Wasserstein distance to align these two distributions, achieving nearly 100\% codebook utilization and significantly reducing the quantization error. Both empirical and theoretical analyses validate the effectiveness of the proposed approach.
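To make the setup concrete, the sketch below shows a standard vector quantizer with a straight-through estimator, augmented by a distribution-matching penalty between the encoder features and the code vectors. It is a minimal illustration, not the authors' implementation: the penalty here uses a closed-form 2-Wasserstein distance between diagonal-Gaussian approximations of the two distributions, and the class name, `gamma` weight, and Gaussian approximation are assumptions of this sketch rather than details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VQWithDistributionMatching(nn.Module):
    """Illustrative vector quantizer with a straight-through estimator and a
    distribution-matching penalty between features and code vectors.
    This is a sketch of the general idea, not the paper's exact method."""

    def __init__(self, num_codes: int, dim: int, beta: float = 0.25, gamma: float = 1.0):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.normal_(self.codebook.weight)
        self.beta = beta    # commitment weight (standard VQ-VAE term)
        self.gamma = gamma  # weight of the distribution-matching term (assumed)

    def forward(self, z: torch.Tensor):
        # z: (batch, dim) continuous encoder features
        codes = self.codebook.weight                 # (num_codes, dim)
        dists = torch.cdist(z, codes)                # pairwise L2 distances
        idx = dists.argmin(dim=1)                    # nearest-code assignment
        z_q = codes[idx]                             # quantized features

        # Straight-through estimator: forward uses z_q, backward copies
        # gradients from z_q to z, which is the source of the gradient
        # discrepancy discussed in the abstract.
        z_q_st = z + (z_q - z).detach()

        # Standard VQ objective: codebook loss + commitment loss.
        vq_loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())

        # Distribution matching: closed-form 2-Wasserstein distance between
        # diagonal-Gaussian approximations of the feature and codebook
        # distributions (a simplifying assumption of this sketch).
        mu_z, std_z = z.mean(dim=0), z.std(dim=0)
        mu_c, std_c = codes.mean(dim=0), codes.std(dim=0)
        w2 = ((mu_z - mu_c) ** 2 + (std_z - std_c) ** 2).sum()

        loss = vq_loss + self.gamma * w2
        return z_q_st, idx, loss
```

In this toy form, minimizing `w2` pulls the codebook's overall location and spread toward those of the features, so more codes fall near regions the encoder actually occupies; this is one way to encourage high codebook utilization and smaller quantization error, in the spirit of the distributional matching described above.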
@article{fang2025_2506.15078,
  title   = {Enhancing Vector Quantization with Distributional Matching: A Theoretical and Empirical Study},
  author  = {Xianghong Fang and Litao Guo and Hengchao Chen and Yuxuan Zhang and Xiaofan Xia and Dingjie Song and Yexin Liu and Hao Wang and Harry Yang and Yuan Yuan and Qiang Sun},
  journal = {arXiv preprint arXiv:2506.15078},
  year    = {2025}
}