arXiv:2310.03575
Analysis of learning a flow-based generative model from limited sample complexity

5 October 2023
Hugo Cui
Florent Krzakala
Eric Vanden-Eijnden
Lenka Zdeborová
Abstract

We study the problem of training a flow-based generative model, parametrized by a two-layer autoencoder, to sample from a high-dimensional Gaussian mixture. We provide a sharp end-to-end analysis of the problem. First, we provide a tight closed-form characterization of the learnt velocity field, when parametrized by a shallow denoising auto-encoder trained on a finite number $n$ of samples from the target distribution. Building on this analysis, we provide a sharp description of the corresponding generative flow, which pushes the base Gaussian density forward to an approximation of the target density. In particular, we provide closed-form formulae for the distance between the mean of the generated mixture and the mean of the target mixture, which we show decays as $\Theta_n(1/n)$. Finally, this rate is shown to be in fact Bayes-optimal.
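The generative flow described in the abstract can be illustrated numerically. The sketch below is an illustration, not the paper's construction: instead of a velocity field learnt by a shallow denoising autoencoder from $n$ samples, it uses the exact velocity field of a linear stochastic interpolant $x_t = (1-t)z + t x_1$ for a symmetric two-mode 1-D Gaussian mixture (closed-form here because the conditional expectations are Gaussian), and integrates the probability-flow ODE with plain Euler steps to push base Gaussian samples forward to the target mixture. The constants `MU`, `SIGMA`, the sample count, and the step count are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: 0.5 * N(+MU, SIGMA^2) + 0.5 * N(-MU, SIGMA^2)  (illustrative values)
MU, SIGMA = 3.0, 0.5

def velocity(x, t):
    """Exact velocity E[x1 - z | x_t = x] for the interpolant x_t=(1-t)z + t*x1."""
    means = np.array([MU, -MU])
    s2 = (1 - t) ** 2 + t ** 2 * SIGMA ** 2        # Var(x_t | mixture component)
    resid = x[:, None] - t * means[None, :]        # x_t minus per-component mean
    # Posterior weights over the two components, computed stably in log space.
    logw = -resid ** 2 / (2 * s2)
    logw -= logw.max(axis=1, keepdims=True)
    w = np.exp(logw)
    w /= w.sum(axis=1, keepdims=True)
    # Gaussian conditional expectations given x_t and the component.
    e_x1 = means[None, :] + (t * SIGMA ** 2 / s2) * resid   # E[x1 | x_t, k]
    e_z = ((1 - t) / s2) * resid                            # E[z  | x_t, k]
    return (w * (e_x1 - e_z)).sum(axis=1)

def sample_flow(n, steps=300):
    """Euler-integrate dx/dt = v(x, t) from t=0 (base Gaussian) to t=1."""
    x = rng.standard_normal(n)
    dt = 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity(x, i * dt)
    return x

xs = sample_flow(4000)
```

With the exact velocity, the pushed-forward samples land in the two modes near $\pm$`MU` with roughly equal weight; the paper's analysis quantifies how far this picture degrades when the velocity is instead estimated from $n$ samples.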
