ResearchTrend.AI
Denoising Score Matching with Random Features: Insights on Diffusion Models from Precise Learning Curves

1 February 2025
Anand Jerry George
Rodrigo Veiga
Nicolas Macris
arXiv: 2502.00336
Abstract

We derive asymptotically precise expressions for the test and train errors of denoising score matching (DSM) in generative diffusion models. The score function is parameterized by a random features neural network, with the target distribution being a d-dimensional standard Gaussian. We operate in a regime where the dimension d, the number of data samples n, and the number of features p tend to infinity while the ratios ψ_n = n/d and ψ_p = p/d remain fixed. By characterizing the test and train errors, we identify regimes of generalization and memorization in diffusion models. Furthermore, our work sheds light on the conditions that enhance either generalization or memorization. Consistent with prior empirical observations, our findings indicate that the model complexity (p) and the number of noise samples per data sample (m) used during DSM significantly influence generalization and memorization behaviors.
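The setup described in the abstract can be sketched numerically: draw n Gaussian data points in dimension d, corrupt each with m noise samples, and fit the readout weights of a fixed random-features network to the DSM regression targets. The sketch below is a minimal illustration of this pipeline, not the paper's derivation; the noise level sigma, the ReLU feature map, and the small ridge regularizer are assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: dimension d, samples n, features p,
# and m noise samples per datum (ratios psi_n = n/d, psi_p = p/d fixed).
d, n, p, m = 20, 200, 400, 5
sigma = 1.0  # noise level for DSM (an assumption, not fixed by the abstract)

# Data from the target distribution: d-dimensional standard Gaussian.
X = rng.standard_normal((n, d))

# Fixed random features phi(x) = relu(F x / sqrt(d)); only the readout is trained.
F = rng.standard_normal((p, d))

def features(x):
    """Map a (batch, d) array to (batch, p) random features."""
    return np.maximum(x @ F.T / np.sqrt(d), 0.0)

# DSM regression problem: for each datum draw m noise samples; the target
# score at the noised point x + sigma*z is -z/sigma.
Z = rng.standard_normal((n, m, d))
Xn = X[:, None, :] + sigma * Z            # noised inputs, shape (n, m, d)
Phi = features(Xn.reshape(n * m, d))      # (n*m, p)
Y = (-Z / sigma).reshape(n * m, d)        # regression targets

# Ridge regression for the linear readout W, so the score is s(x) = phi(x) @ W.
lam = 1e-3
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ Y)

# Empirical (train) DSM error and a fresh-sample test error.
train_err = np.mean(np.sum((Phi @ W - Y) ** 2, axis=1))
Xt = rng.standard_normal((n, d))
Zt = rng.standard_normal((n, d))
test_err = np.mean(np.sum((features(Xt + sigma * Zt) @ W - (-Zt / sigma)) ** 2, axis=1))
```

Sweeping p (model complexity) or m (noise samples per datum) in this sketch, while holding n/d and p/d fixed, is the kind of experiment whose asymptotic train/test curves the paper characterizes exactly.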

@article{george2025_2502.00336,
  title={Denoising Score Matching with Random Features: Insights on Diffusion Models from Precise Learning Curves},
  author={Anand Jerry George and Rodrigo Veiga and Nicolas Macris},
  journal={arXiv preprint arXiv:2502.00336},
  year={2025}
}