Learning (Very) Simple Generative Models Is Hard

31 May 2022
Sitan Chen
Jungshian Li
Yuanzhi Li
Abstract

Motivated by the recent empirical successes of deep generative models, we study the computational complexity of the following unsupervised learning problem. For an unknown neural network $F:\mathbb{R}^d\to\mathbb{R}^{d'}$, let $D$ be the distribution over $\mathbb{R}^{d'}$ given by pushing the standard Gaussian $\mathcal{N}(0,\mathrm{Id}_d)$ through $F$. Given i.i.d. samples from $D$, the goal is to output any distribution close to $D$ in statistical distance. We show under the statistical query (SQ) model that no polynomial-time algorithm can solve this problem even when the output coordinates of $F$ are one-hidden-layer ReLU networks with $\log(d)$ neurons. Previously, the best lower bounds for this problem simply followed from lower bounds for supervised learning and required at least two hidden layers and $\mathrm{poly}(d)$ neurons [Daniely-Vardi '21, Chen-Gollakota-Klivans-Meka '22]. The key ingredient in our proof is an ODE-based construction of a compactly supported, piecewise-linear function $f$ with polynomially bounded slopes such that the pushforward of $\mathcal{N}(0,1)$ under $f$ matches all low-degree moments of $\mathcal{N}(0,1)$.
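To make the problem setup concrete, here is a minimal Python sketch of the generative model described in the abstract: each output coordinate of $F$ is a one-hidden-layer ReLU network with roughly $\log(d)$ neurons, and the learner only observes i.i.d. samples of $F(z)$ for $z\sim\mathcal{N}(0,\mathrm{Id}_d)$. The dimensions and weights below are arbitrary placeholders for illustration, not the paper's hard instance or its ODE-based construction.

```python
import numpy as np

rng = np.random.default_rng(0)

d, d_out = 64, 32                      # input / output dimensions (illustrative choices)
k = max(1, int(np.log(d)))             # ~log(d) hidden neurons per output coordinate

# One hidden layer of ReLUs per output coordinate:
#   F_j(z) = a_j . relu(W_j z + b_j)
# Random weights here are placeholders; the paper's lower bound uses a
# carefully constructed F, not a random one.
W = rng.standard_normal((d_out, k, d))
b = rng.standard_normal((d_out, k))
a = rng.standard_normal((d_out, k))

def F(z):
    """Apply the one-hidden-layer ReLU network to each output coordinate."""
    hidden = np.maximum(np.einsum("jkd,nd->njk", W, z) + b, 0.0)  # ReLU
    return np.einsum("jk,njk->nj", a, hidden)

# i.i.d. samples from D, the pushforward of N(0, Id_d) under F; the learner
# sees only these samples and must output some distribution close to D in
# statistical distance.
z = rng.standard_normal((10_000, d))
samples = F(z)
print(samples.shape)  # (10000, 32)
```

The SQ lower bound in the paper says that, even for networks this shallow and narrow, no polynomial-time statistical query algorithm can learn $D$ from such samples.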
