Taking Control of Intra-class Variation in Conditional GANs Under Weak Supervision

arXiv:1811.11296 · 27 November 2018
Richard T. Marriott, S. Romdhani, Liming Luke Chen
Topic: GAN
Abstract

Generative Adversarial Networks (GANs) are able to learn mappings between simple, relatively low-dimensional, random distributions and points on the manifold of realistic images in image-space. The semantics of this mapping, however, are typically entangled such that meaningful image properties cannot be controlled independently of one another. Conditional GANs (cGANs) provide a potential solution to this problem, allowing specific semantics to be enforced during training. This solution, however, depends on the availability of precise labels, which are sometimes difficult or near impossible to obtain, e.g. labels representing lighting conditions or describing the background. In this paper we introduce a new formulation of the cGAN that is able to learn disentangled, multivariate models of semantically meaningful variation and which has the advantage of requiring only the weak supervision of binary attribute labels. For example, given only labels of ambient / non-ambient lighting, our method is able to learn multivariate lighting models disentangled from other factors such as the identity and pose. We coin the method intra-class variation isolation (IVI) and the resulting network the IVI-GAN. We evaluate IVI-GAN on the CelebA dataset and on synthetic 3D morphable model data, learning to disentangle attributes such as lighting, pose, expression, and even the background.
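
The abstract describes extra, multivariate latent parameters that are tied to binary attribute labels so that they can only model variation within the labelled class (e.g. lighting variation, present only when the lighting attribute is active). As a rough illustration of that gating idea, here is a minimal PyTorch sketch of a conditional generator; the class name IVIGenerator, the layer sizes, and the zero-gating of the per-attribute variation vectors are illustrative assumptions, not the paper's actual architecture or training setup.

```python
# Minimal sketch (assumed architecture, not the paper's implementation):
# each binary attribute gets its own multivariate "variation" vector that is
# zeroed out when the attribute is absent, so those parameters can only
# describe intra-class variation of that attribute.
import torch
import torch.nn as nn

class IVIGenerator(nn.Module):
    def __init__(self, z_dim=128, n_attrs=4, var_dim=8, img_dim=64 * 64 * 3):
        super().__init__()
        self.var_dim = var_dim
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_attrs + n_attrs * var_dim, 512),
            nn.ReLU(),
            nn.Linear(512, img_dim),
            nn.Tanh(),
        )

    def forward(self, z, attrs):
        # attrs: (batch, n_attrs) binary labels, e.g. ambient / non-ambient lighting.
        b, n_attrs = attrs.shape
        v = torch.randn(b, n_attrs, self.var_dim, device=z.device)
        v = v * attrs.unsqueeze(-1)              # gate variation vectors by the binary label
        x = torch.cat([z, attrs, v.flatten(1)], dim=1)
        return self.net(x)

g = IVIGenerator()
z = torch.randn(2, 128)
attrs = torch.tensor([[1., 0., 1., 0.], [0., 1., 0., 0.]])
fake = g(z, attrs)                               # (2, 12288) image vectors in [-1, 1]
```

In this sketch the discriminator and losses are those of an ordinary conditional GAN; the only change is that the generator receives additional noise inputs that are active only when the corresponding attribute label is set, which is the mechanism the abstract refers to as intra-class variation isolation.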
