ResearchTrend.AI
arXiv:2106.05232
Realizing GANs via a Tunable Loss Function

9 June 2021
Gowtham R. Kurri
Tyler Sypherd
Lalitha Sankar
Topics: GAN
Abstract

We introduce a tunable GAN, called α-GAN, parameterized by α ∈ (0, ∞], which interpolates between various f-GANs and Integral Probability Metric based GANs (under a constrained discriminator set). We construct α-GAN using a supervised loss function, namely α-loss, a tunable loss function that captures several canonical losses. We show that α-GAN is intimately related to the Arimoto divergence, first proposed by Österreicher (1996) and later studied by Liese and Vajda (2006). We also study the convergence properties of α-GAN. We posit that the holistic understanding α-GAN provides will have the practical benefit of addressing both vanishing gradients and mode collapse.
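The abstract describes α-loss as a tunable loss that captures several canonical losses. As a hedged illustration only, the sketch below follows the α-loss form from the authors' earlier work on classification losses; the exact normalization and conventions used in this paper are an assumption here, not taken from this page. The key behavior is the interpolation: α → 1 recovers log-loss (cross-entropy), while α → ∞ yields a soft 0-1 loss.

```python
import math

def alpha_loss(p_true, alpha):
    """α-loss of the probability assigned to the true label.

    Illustrative sketch (assumed form, following Sypherd et al.'s
    α-loss for classification):
        ℓ_α(p) = (α / (α - 1)) * (1 - p^((α - 1)/α)),  α ∈ (0, ∞],
    with the limiting cases handled explicitly.
    """
    if alpha == 1.0:
        # Limit α → 1: log-loss (cross-entropy)
        return -math.log(p_true)
    if math.isinf(alpha):
        # Limit α → ∞: soft 0-1 loss
        return 1.0 - p_true
    return (alpha / (alpha - 1.0)) * (1.0 - p_true ** ((alpha - 1.0) / alpha))
```

Tuning α in a single family like this is what lets one loss function sweep through several canonical ones, which is the mechanism the abstract invokes when α-GAN interpolates between different GAN objectives.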
