arXiv:1911.10922

Towards Better Understanding of Disentangled Representations via Mutual Information

25 November 2019
Xiaojiang Yang
Wendong Bi
Yitong Sun
Yu Cheng
Junchi Yan
Abstract

Most existing works on disentangled representation learning are built solely upon a marginal independence assumption: all factors in disentangled representations should be statistically independent. As recent studies have shown theoretically, this assumption is necessary but not sufficient for disentangled representations without additional inductive biases in the modeling process. We argue in this work that disentangled representations should be characterized by their relation to observable data. In particular, we formulate this relation through the concept of mutual information: the mutual information between each factor of a disentangled representation and the data should be invariant when conditioned on the values of the other factors. Together with the widely accepted independence assumption, we further connect this invariance to the conditional independence of the factors in a representation given the data. Moreover, we note that conditional independence of the latent variables is already imposed in most VAE-type models and in InfoGAN through the artificial choice of a factorized approximate posterior $q(\mathbf{z} \mid \mathbf{x})$ in the encoders. This arrangement of the encoders introduces a crucial inductive bias for disentangled representations. To demonstrate the importance of the proposed assumption and the related inductive bias, we show experimentally that violating the assumption leads to a decline in disentanglement among the factors of the learned representations.
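As a rough formalization (notation ours; the paper's exact statements may differ), the two conditions described in the abstract can be sketched as follows, with $\mathbf{z} = (z_1, \dots, z_K)$ the representation, $\mathbf{z}_{-i}$ all factors except $z_i$, and $\mathbf{x}$ the data:

```latex
% Hedged sketch of the two conditions; notation is ours, not verbatim from the paper.
% (1) Mutual-information invariance: the mutual information between factor z_i
%     and the data, conditioned on the remaining factors, does not depend on
%     which values those factors take.
I\bigl(z_i;\, \mathbf{x} \mid \mathbf{z}_{-i} = \mathbf{c}\bigr)
  \;=\; I\bigl(z_i;\, \mathbf{x} \mid \mathbf{z}_{-i} = \mathbf{c}'\bigr)
  \qquad \forall\, \mathbf{c}, \mathbf{c}',\ \forall\, i .

% (2) Conditional independence of the factors given the data, as imposed by a
%     factorized approximate posterior in VAE-type models and InfoGAN:
q(\mathbf{z} \mid \mathbf{x}) \;=\; \prod_{i=1}^{K} q(z_i \mid \mathbf{x}) .
```

Condition (2) is exactly what a standard diagonal-Gaussian encoder implements. Below is a minimal PyTorch sketch (ours, assuming a generic VAE setup; the class name `FactorizedGaussianEncoder` is hypothetical, not from the paper):

```python
# Minimal sketch, assuming a generic VAE setup; not the authors' code.
# A diagonal-Gaussian encoder makes the latent factors conditionally
# independent given x: q(z|x) = prod_i N(z_i | mu_i(x), sigma_i(x)^2).
import torch
import torch.nn as nn

class FactorizedGaussianEncoder(nn.Module):  # hypothetical name
    def __init__(self, x_dim: int, z_dim: int, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)       # per-factor mean
        self.log_var = nn.Linear(hidden, z_dim)  # per-factor log-variance

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick with independent noise per factor,
        # so the approximate posterior factorizes across z_1, ..., z_K.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        return z, mu, log_var
```

Replacing the diagonal covariance with a full or autoregressive one would break the factorization in (2), which is one way the conditional-independence assumption could be violated.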
