arXiv:2205.06393

α-GAN: Convergence and Estimation Guarantees

12 May 2022
Gowtham R. Kurri
Monica Welfert
Tyler Sypherd
Lalitha Sankar
Topics: GAN
Abstract

We prove a two-way correspondence between the min-max optimization of general CPE (class probability estimation) loss function GANs and the minimization of associated f-divergences. We then focus on α-GAN, defined via the α-loss, which interpolates several GANs (Hellinger, vanilla, Total Variation) and corresponds to the minimization of the Arimoto divergence. We show that the Arimoto divergences induced by α-GAN equivalently converge for all α ∈ (0, ∞]. However, under restricted learning models and finite samples, we provide estimation bounds which indicate diverse GAN behavior as a function of α. Finally, we present empirical results on a toy dataset that highlight the practical utility of tuning the α hyperparameter.
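For context, here is a brief sketch of the objects the abstract names, reconstructed from the authors' earlier work on the α-loss (the formulas below are our reading of that work, not quoted from this page). For a predicted probability p̂ assigned to the true class, the α-loss is

```latex
\ell_\alpha(\hat{p}) = \frac{\alpha}{\alpha - 1}\left(1 - \hat{p}^{\frac{\alpha - 1}{\alpha}}\right),
\qquad \alpha \in (0, 1) \cup (1, \infty),
```

with the limits ℓ_1(p̂) = −log p̂ (log-loss, recovering the vanilla GAN) and ℓ_∞(p̂) = 1 − p̂ (recovering Total Variation-type behavior). Substituting ℓ_α into the CPE-loss GAN template gives the α-GAN value function, minimized over the generator parameters θ and maximized over the discriminator parameters ω:

```latex
V_\alpha(\theta, \omega) =
\mathbb{E}_{X \sim P_r}\!\left[-\ell_\alpha\!\left(D_\omega(X)\right)\right]
+ \mathbb{E}_{X \sim P_{G_\theta}}\!\left[-\ell_\alpha\!\left(1 - D_\omega(X)\right)\right],
```

whose inner supremum over discriminators D_ω equals, up to additive constants, the Arimoto divergence between the real distribution P_r and the generated distribution P_{G_θ}.

A minimal numerical sketch of the interpolation, assuming the α-loss formula above (alpha_loss is an illustrative helper, not code released with the paper):

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """alpha-loss of the probability assigned to the true class.

    alpha -> 1 recovers log-loss (vanilla GAN); alpha -> infinity recovers
    the soft 0-1 loss 1 - p (Total Variation-type behavior). The formula is
    an assumption taken from the authors' earlier alpha-loss work.
    """
    p = np.clip(p_true, 1e-12, 1.0)
    if np.isinf(alpha):
        return 1.0 - p                     # alpha = infinity limit
    if np.isclose(alpha, 1.0):
        return -np.log(p)                  # alpha = 1 limit (log-loss)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

# Smaller alpha penalizes confident mistakes more steeply; larger alpha
# is more forgiving -- the knob the abstract suggests tuning.
p = np.array([0.1, 0.5, 0.9])
for a in (0.5, 1.0, 2.0, np.inf):
    print(f"alpha={a}: {np.round(alpha_loss(p, a), 3)}")
```

The printed values show the interpolation directly: at α = 0.5 a confident mistake (p = 0.1) costs 9.0, at α = 1 it costs −log 0.1 ≈ 2.3, and at α = ∞ the loss flattens to 1 − p = 0.9.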
