arXiv:2211.11397

Learning Low-Rank Representations for Model Compression

21 November 2022
Zezhou Zhu
Yucong Zhou
Zhaobai Zhong
Abstract

Vector Quantization (VQ) is an appealing model compression method for obtaining a tiny model with little accuracy loss. While methods for obtaining better codebooks and codes under a fixed clustering dimensionality have been extensively studied, optimizing the vectors themselves in favour of clustering performance has not been carefully considered, especially via the reduction of vector dimensionality. This paper reports our recent progress on combining dimensionality compression with vector quantization, proposing a Low-Rank Representation Vector Quantization ($\text{LR}^2\text{VQ}$) method that outperforms previous VQ algorithms across various tasks and architectures. $\text{LR}^2\text{VQ}$ joins low-rank representation with subvector clustering to construct a new kind of building block that is directly optimized through end-to-end training over the task loss. Our proposed design pattern introduces three hyper-parameters: the number of clusters $k$, the size of subvectors $m$, and the clustering dimensionality $\tilde{d}$. In our method, the compression ratio can be directly controlled by $m$, and the final accuracy is solely determined by $\tilde{d}$. We recognize $\tilde{d}$ as a trade-off between low-rank approximation error and clustering error, and provide both theoretical analysis and experimental observations that enable estimating a proper $\tilde{d}$ before fine-tuning. With a proper $\tilde{d}$, we evaluate $\text{LR}^2\text{VQ}$ with ResNet-18/ResNet-50 on the ImageNet classification dataset, achieving 2.8%/1.0% top-1 accuracy improvements over current state-of-the-art VQ-based compression algorithms at $43\times$/$31\times$ compression factors.
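
The abstract outlines a three-step recipe: compress weight vectors to a lower clustering dimensionality $\tilde{d}$, split the result into subvectors of size $m$, and quantize those subvectors into $k$ clusters. The following is a minimal NumPy sketch of that general idea, not the authors' implementation: it substitutes a truncated SVD for the low-rank step and plain k-means (Lloyd's algorithm) for the clustering, whereas the paper optimizes the low-rank representation end-to-end through the task loss. All function and variable names here (lr2vq_sketch, d_tilde, etc.) are illustrative assumptions.

import numpy as np

def lr2vq_sketch(W, d_tilde, m, k, iters=20):
    """Hypothetical sketch: low-rank projection of an (n, d) weight
    matrix W, then vector quantization of its subvectors."""
    # Low-rank representation via truncated SVD: W ~= Z @ Vt,
    # where Z holds n codes of reduced clustering dimensionality d_tilde.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    Z = U[:, :d_tilde] * S[:d_tilde]   # (n, d_tilde) low-rank codes
    Vt = Vt[:d_tilde]                  # (d_tilde, d) shared basis

    # Split each code into subvectors of size m; m controls the
    # compression ratio (larger subvectors -> fewer stored indices).
    assert d_tilde % m == 0
    sub = Z.reshape(-1, m)             # (n * d_tilde // m, m)

    # Plain k-means over the subvectors (a stand-in for the paper's
    # end-to-end optimized clustering).
    rng = np.random.default_rng(0)
    codebook = sub[rng.choice(len(sub), size=k, replace=False)]
    for _ in range(iters):
        dists = ((sub[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        codes = dists.argmin(axis=1)
        for j in range(k):
            members = sub[codes == j]
            if len(members):
                codebook[j] = members.mean(axis=0)

    # Reconstruct: codebook lookup -> low-rank codes -> full weights.
    Z_q = codebook[codes].reshape(Z.shape)
    return Z_q @ Vt

W = np.random.randn(256, 64)
W_q = lr2vq_sketch(W, d_tilde=32, m=4, k=64)
print("relative error:", np.linalg.norm(W - W_q) / np.linalg.norm(W))

In this framing, the storage cost after compression is the codebook ($k \times m$ floats), one index per subvector, and the shared basis, which is where the compression factor comes from; the paper's contribution is that $\tilde{d}$ trades low-rank approximation error against clustering error and can be estimated before fine-tuning.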

View on arXiv