Implicit Manifold Learning on Generative Adversarial Networks

Abstract

This paper raises an implicit manifold learning perspective in Generative Adversarial Networks (GANs) by studying how the support of the learned distribution, modelled as a submanifold $\mathcal{M}_{\theta}$, perfectly matches $\mathcal{M}_{r}$, the support of the real data distribution. We show that optimizing the Jensen-Shannon divergence forces $\mathcal{M}_{\theta}$ to perfectly match $\mathcal{M}_{r}$, while optimizing the Wasserstein distance does not. On the other hand, by comparing the gradients of the Jensen-Shannon divergence and the Wasserstein distances ($W_1$ and $W_2^2$) in their primal forms, we conjecture that $W_2^2$ may enjoy desirable properties such as reduced mode collapse. It is therefore interesting to design new distances that inherit the best of both.

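For context, the divergences compared in the abstract admit the following standard definitions; the notation $\mathbb{P}_r$ for the real data distribution and $\mathbb{P}_{\theta}$ for the learned distribution is assumed here, and the Wasserstein distances are written in their primal (Kantorovich) form, as referenced in the abstract.

$$
\mathrm{JSD}(\mathbb{P}_r, \mathbb{P}_{\theta})
  = \tfrac{1}{2}\,\mathrm{KL}\!\left(\mathbb{P}_r \,\middle\|\, \tfrac{\mathbb{P}_r + \mathbb{P}_{\theta}}{2}\right)
  + \tfrac{1}{2}\,\mathrm{KL}\!\left(\mathbb{P}_{\theta} \,\middle\|\, \tfrac{\mathbb{P}_r + \mathbb{P}_{\theta}}{2}\right),
$$

$$
W_1(\mathbb{P}_r, \mathbb{P}_{\theta})
  = \inf_{\gamma \in \Pi(\mathbb{P}_r, \mathbb{P}_{\theta})} \mathbb{E}_{(x,y)\sim\gamma}\big[\lVert x - y \rVert\big],
\qquad
W_2^2(\mathbb{P}_r, \mathbb{P}_{\theta})
  = \inf_{\gamma \in \Pi(\mathbb{P}_r, \mathbb{P}_{\theta})} \mathbb{E}_{(x,y)\sim\gamma}\big[\lVert x - y \rVert^2\big],
$$

where $\Pi(\mathbb{P}_r, \mathbb{P}_{\theta})$ denotes the set of couplings whose marginals are $\mathbb{P}_r$ and $\mathbb{P}_{\theta}$. Minimizing the Jensen-Shannon divergence to zero requires $\mathbb{P}_{\theta} = \mathbb{P}_r$ (hence matching supports), whereas a Wasserstein distance can remain small even when the supports differ slightly, which is consistent with the support-matching distinction drawn in the abstract.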