ResearchTrend.AI
From discrete-time policies to continuous-time diffusion samplers: Asymptotic equivalences and faster training

10 January 2025
Julius Berner
Lorenz Richter
Marcin Sendera
Jarrid Rector-Brooks
Nikolay Malkin
    OffRL

Papers citing "From discrete-time policies to continuous-time diffusion samplers: Asymptotic equivalences and faster training"

4 / 54 papers shown
Neural Stochastic Differential Equations: Deep Latent Gaussian Models in the Diffusion Limit
  Belinda Tzen, Maxim Raginsky (DiffM) · 211 citations · 23 May 2019

Flow-based generative models for Markov chain Monte Carlo in lattice field theory
  M. S. Albergo, G. Kanwar, P. Shanahan (AI4CE) · 220 citations · 26 Apr 2019

Deep Unsupervised Learning using Nonequilibrium Thermodynamics
  Jascha Narain Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli (SyDa, DiffM) · 7,054 citations · 12 Mar 2015

The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo
  Matthew D. Hoffman, Andrew Gelman · 4,326 citations · 18 Nov 2011