Masked Vector Quantization

16 January 2023
David D. Nguyen
David Leibowitz
Surya Nepal
S. Kanhere
Abstract

Generative models with discrete latent representations have recently demonstrated an impressive ability to learn complex high-dimensional data distributions. However, their performance relies on a long sequence of tokens per instance and a large number of codebook entries, resulting in long sampling times and considerable computation to fit the categorical posterior. To address these issues, we propose the Masked Vector Quantization (MVQ) framework, which increases the representational capacity of each code vector by learning mask configurations via a stochastic winner-takes-all training regime called Multiple Hypothese Dropout (MH-Dropout). On ImageNet 64×64, MVQ reduces FID in existing vector quantization architectures by up to 68% at 2 tokens per instance and 57% at 5 tokens. These improvements widen as the number of codebook entries is reduced, and allow for a 7–45× speed-up in token sampling during inference. As an additional benefit, we find that smaller latent spaces lead MVQ to identify transferable visual representations, multiple of which can be smoothly combined.
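To make the idea concrete, below is a minimal sketch of masked vector quantization with a stochastic winner-takes-all selection over mask hypotheses, as the abstract describes. It is an illustrative reading of the abstract, not the authors' implementation: the class and parameter names (MaskedVQ, num_masks, drop_prob) are hypothetical, soft sigmoid masks stand in for whatever mask parameterization the paper actually uses, and the dropout-then-argmin step is only one plausible realization of MH-Dropout.

```python
# Hypothetical sketch of masked vector quantization (MVQ) with a
# stochastic winner-takes-all choice over mask hypotheses (MH-Dropout-like).
# Not the authors' code; names and details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedVQ(nn.Module):
    def __init__(self, num_codes=64, num_masks=8, dim=32, drop_prob=0.5):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)                  # code vectors e_k
        self.mask_logits = nn.Parameter(torch.zeros(num_masks, dim))  # learned mask configurations
        self.drop_prob = drop_prob

    def forward(self, z):                                  # z: (B, dim) encoder outputs
        masks = torch.sigmoid(self.mask_logits)            # soft masks in [0, 1]
        # Candidate quantized vectors: every (code, mask) pair yields a masked code vector,
        # so each codebook entry represents num_masks hypotheses instead of one.
        cands = self.codebook.weight.unsqueeze(1) * masks.unsqueeze(0)  # (K, M, dim)
        cands = cands.reshape(-1, cands.shape[-1])                      # (K*M, dim)
        d = torch.cdist(z, cands)                           # (B, K*M) distance to each hypothesis
        if self.training:
            # Stochastic winner-takes-all: randomly drop hypotheses before
            # picking the winner, so different hypotheses receive gradient
            # across training steps rather than one winner dominating.
            keep = torch.rand_like(d) > self.drop_prob
            keep[..., :1] |= ~keep.any(-1, keepdim=True)    # ensure at least one survives
            d = d.masked_fill(~keep, float("inf"))
        idx = d.argmin(dim=-1)                              # winning (code, mask) hypothesis
        q = cands[idx]
        # Straight-through estimator so the encoder still receives gradients.
        q_st = z + (q - z).detach()
        commit_loss = F.mse_loss(z, q.detach()) + F.mse_loss(q, z.detach())
        return q_st, idx, commit_loss

# Usage: quantize a batch of encoder outputs.
vq = MaskedVQ()
z = torch.randn(16, 32)
q, idx, loss = vq(z)
print(q.shape, idx.shape, loss.item())
```

The point of the sketch is the capacity argument: with K codes and M masks, each token can index K·M distinct quantized vectors, which is how a short token sequence and a small codebook can still cover a rich latent space.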
