ResearchTrend.AI
Neural Implicit Manifold Learning for Topology-Aware Density Estimation

22 June 2022
Brendan Leigh Ross
G. Loaiza-Ganem
Anthony L. Caterini
Jesse C. Cresswell
Abstract

Natural data observed in $\mathbb{R}^n$ is often constrained to an $m$-dimensional manifold $\mathcal{M}$, where $m < n$. This work focuses on the task of building theoretically principled generative models for such data. Current generative models learn $\mathcal{M}$ by mapping an $m$-dimensional latent variable through a neural network $f_\theta: \mathbb{R}^m \to \mathbb{R}^n$. These procedures, which we call pushforward models, incur a straightforward limitation: manifolds cannot in general be represented with a single parameterization, meaning that attempts to do so will incur either computational instability or the inability to learn probability densities within the manifold. To remedy this problem, we propose to model $\mathcal{M}$ as a neural implicit manifold: the set of zeros of a neural network. We then learn the probability density within $\mathcal{M}$ with a constrained energy-based model, which employs a constrained variant of Langevin dynamics to train and sample from the learned manifold. In experiments on synthetic and natural data, we show that our model can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
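The idea of sampling with Langevin dynamics constrained to an implicit manifold can be illustrated with a toy example. The sketch below is an assumption-laden illustration, not the paper's implementation: the implicit constraint $F$ is a hand-written unit circle rather than a learned neural network, the energy is a simple linear function, and the step sizes and Newton-style projection are arbitrary choices. Each step projects the energy gradient and the injected noise onto the tangent space of $\{x : F(x) = 0\}$, then projects the iterate back onto the manifold.

```python
import numpy as np

def F(x):
    # Toy implicit manifold: the unit circle in R^2, defined as {x : F(x) = 0}.
    # In the paper this role is played by a learned neural network.
    return x[0] ** 2 + x[1] ** 2 - 1.0

def grad_F(x):
    return 2.0 * x

def grad_energy(x):
    # Gradient of a toy energy E(x) = -x[0], which favors the right half
    # of the circle (density on the manifold is proportional to exp(-E)).
    return np.array([-1.0, 0.0])

def project(x, n_steps=20):
    # Newton-style projection back onto {F = 0} after each Langevin step.
    for _ in range(n_steps):
        g = grad_F(x)
        x = x - F(x) * g / (g @ g)
    return x

def constrained_langevin_step(x, step=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    gF = grad_F(x)
    # Projector onto the tangent space of the manifold at x.
    P = np.eye(len(x)) - np.outer(gF, gF) / (gF @ gF)
    noise = rng.standard_normal(len(x))
    # Langevin update with drift and noise both restricted to the tangent space,
    # followed by projection back onto the constraint set.
    x = x - step * (P @ grad_energy(x)) + np.sqrt(2.0 * step) * (P @ noise)
    return project(x)

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0])
samples = []
for _ in range(2000):
    x = constrained_langevin_step(x, rng=rng)
    samples.append(x.copy())
samples = np.array(samples)
```

After the loop, every sample satisfies the constraint to numerical precision, and the empirical distribution concentrates in the low-energy region of the circle. The constrained energy-based model described in the abstract uses this kind of manifold-restricted dynamics for both training and sampling, with the implicit manifold itself given by a learned network.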
