
arXiv:2011.02402
On the Convergence of Gradient Descent in GANs: MMD GAN As a Gradient Flow

4 November 2020
Youssef Mroueh
Truyen V. Nguyen
Abstract

We consider the maximum mean discrepancy ($\mathrm{MMD}$) GAN problem and propose a parametric kernelized gradient flow that mimics the min-max game in gradient-regularized $\mathrm{MMD}$ GAN. We show that this flow provides a descent direction minimizing the $\mathrm{MMD}$ on a statistical manifold of probability distributions. We then derive an explicit condition which ensures that gradient descent on the parameter space of the generator in gradient-regularized $\mathrm{MMD}$ GAN is globally convergent to the target distribution. Under this condition, we give non-asymptotic convergence results for gradient descent in $\mathrm{MMD}$ GAN. Another contribution of this paper is the introduction of a dynamic formulation of a regularization of $\mathrm{MMD}$, together with a demonstration that the parametric kernelized descent for $\mathrm{MMD}$ is the gradient flow of this functional with respect to the new Riemannian structure. Our theoretical result allows one to treat gradient flows for quite general functionals and thus has potential applications to other variational inference problems on a statistical manifold beyond GANs. Finally, numerical experiments suggest that our parametric kernelized gradient flow stabilizes GAN training and guarantees convergence.
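To make the objective concrete, the sketch below implements a plain (non-parametric, unregularized) MMD particle gradient flow with a Gaussian kernel: source particles are moved along the negative gradient of the squared MMD to a fixed target sample. This is a minimal illustration of MMD descent only, not the paper's parametric kernelized flow or its gradient regularization; all names, the kernel bandwidth, and the step size are illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y."""
    return (gaussian_kernel(X, X, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean())

def mmd2_gradient(X, Y, sigma=1.0):
    """Gradient of mmd2(X, Y) with respect to the source particles X.

    Uses d/dx k(x, z) = -(x - z) / sigma^2 * k(x, z) for the Gaussian kernel.
    """
    n, m = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    grad = np.zeros_like(X)
    for i in range(n):
        # within-source term: (2 / n^2) sum_j d/dx_i k(x_i, x_j)
        gx = (-(X[i] - X) / sigma**2 * Kxx[i][:, None]).sum(0) * 2 / n**2
        # cross term carries the -2/(n m) factor, hence the subtraction below
        gy = (-(X[i] - Y) / sigma**2 * Kxy[i][:, None]).sum(0) * 2 / (n * m)
        grad[i] = gx - gy
    return grad

# Move a Gaussian blob of particles toward a shifted target sample.
rng = np.random.default_rng(0)
X = rng.normal(-1.0, 0.5, (100, 2))   # source particles (updated in place)
Y = rng.normal(1.0, 0.5, (100, 2))    # fixed target sample
before = mmd2(X, Y)
for _ in range(200):
    X -= 5.0 * mmd2_gradient(X, Y)    # explicit Euler step on the flow
after = mmd2(X, Y)
print(before, after)                  # the MMD should decrease
```

The descent direction here lives in sample space; the paper's contribution is to lift such a flow to the generator's parameter space and give conditions under which the resulting gradient descent is globally convergent.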
