ResearchTrend.AI
arXiv:1804.03599
Understanding disentangling in β-VAE

10 April 2018
Christopher P. Burgess
I. Higgins
Arka Pal
Loic Matthey
Nicholas Watters
Guillaume Desjardins
Alexander Lerchner
Abstract

We present new intuitions and theoretical assessments of the emergence of disentangled representations in variational autoencoders. Taking a rate-distortion theory perspective, we show the circumstances under which representations aligned with the underlying generative factors of variation of the data emerge when optimising the modified ELBO bound in β-VAE, as training progresses. From these insights, we propose a modification to the training regime of β-VAE that progressively increases the information capacity of the latent code during training. This modification facilitates the robust learning of disentangled representations in β-VAE, without the previous trade-off in reconstruction accuracy.
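The progressive capacity increase mentioned in the abstract can be sketched as a training-step-dependent target on the KL term of the objective. The sketch below is a minimal illustration, not the authors' implementation; the constants (`c_max`, `steps_to_max`, `gamma`) and function names are assumptions chosen for clarity.

```python
def capacity_at(step, c_max=25.0, steps_to_max=100_000):
    """Linear schedule: the target capacity C grows from 0 nats to c_max
    over the first `steps_to_max` training steps, then stays constant.
    The specific values here are illustrative, not from the abstract."""
    return min(c_max, c_max * step / steps_to_max)

def capacity_controlled_loss(recon_loss, kl, step, gamma=1000.0):
    """Modified beta-VAE objective sketch: reconstruction loss plus a
    penalty gamma * |KL - C(step)| that keeps the KL term near the
    current capacity target, so the latent code gains information
    gradually as training progresses."""
    c = capacity_at(step)
    return recon_loss + gamma * abs(kl - c)
```

In use, `recon_loss` and `kl` would be the per-batch reconstruction and KL terms of the ELBO; only the scalar combination changes relative to a plain β-VAE loss.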
