Ambient Diffusion Omni: Training Good Models with Bad Data

10 June 2025
Giannis Daras
Adrian Rodriguez-Munoz
Adam R. Klivans
Antonio Torralba
Constantinos Daskalakis
Main: 13 pages · Appendix: 17 pages · Bibliography: 4 pages · 28 figures · 9 tables
Abstract

We show how to use low-quality, synthetic, and out-of-distribution images to improve the quality of a diffusion model. Typically, diffusion models are trained on curated datasets that emerge from highly filtered data pools from the Web and other sources. We show that there is immense value in the lower-quality images that are often discarded. We present Ambient Diffusion Omni, a simple, principled framework to train diffusion models that can extract signal from all available images during training. Our framework exploits two properties of natural images -- spectral power law decay and locality. We first validate our framework by successfully training diffusion models with images synthetically corrupted by Gaussian blur, JPEG compression, and motion blur. We then use our framework to achieve state-of-the-art ImageNet FID, and we show significant improvements in both image quality and diversity for text-to-image generative modeling. The core insight is that noise dampens the initial skew between the desired high-quality distribution and the mixed distribution we actually observe. We provide rigorous theoretical justification for our approach by analyzing the trade-off between learning from biased data versus limited unbiased data across diffusion times.
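
The core insight, that added Gaussian noise dampens the skew between the desired high-quality distribution and the mixed distribution actually observed, can be illustrated with a small sketch. The following is a minimal, hypothetical training step (not the authors' released code) in which low-quality images contribute to the denoising objective only at sufficiently high noise levels; the `denoiser` interface, the `sigma_threshold` value, and the log-uniform noise schedule are illustrative assumptions rather than details from the paper.

```python
import math
import torch
import torch.nn as nn


def sample_sigma(n: int, lo: float = 0.01, hi: float = 80.0) -> torch.Tensor:
    """Sample per-image noise levels log-uniformly in [lo, hi]."""
    return torch.exp(torch.empty(n).uniform_(math.log(lo), math.log(hi)))


def training_step(denoiser: nn.Module,
                  clean_batch: torch.Tensor,   # curated, high-quality images
                  lowq_batch: torch.Tensor,    # blurred / compressed / out-of-distribution images
                  sigma_threshold: float = 0.5) -> torch.Tensor:
    """One denoising step in which low-quality images are used only at
    noise levels >= sigma_threshold, where the added Gaussian noise has
    already narrowed the gap between the clean and mixed distributions."""
    sigma_clean = sample_sigma(clean_batch.shape[0])
    # Restrict low-quality data to the high-noise regime (assumed threshold).
    sigma_lowq = sample_sigma(lowq_batch.shape[0]).clamp(min=sigma_threshold)

    x = torch.cat([clean_batch, lowq_batch], dim=0)
    sigma = torch.cat([sigma_clean, sigma_lowq], dim=0).view(-1, 1, 1, 1)

    x_noisy = x + sigma * torch.randn_like(x)

    # Standard denoising objective: predict the clean image from its noisy version.
    pred = denoiser(x_noisy, sigma.flatten())
    return ((pred - x) ** 2).mean()
```

The sketch only captures the qualitative idea; in the paper, the choice of which noise levels admit which data is governed by the theoretical trade-off between learning from biased data and from limited unbiased data across diffusion times, as stated in the abstract.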

@article{daras2025_2506.10038,
  title={Ambient Diffusion Omni: Training Good Models with Bad Data},
  author={Giannis Daras and Adrian Rodriguez-Munoz and Adam Klivans and Antonio Torralba and Constantinos Daskalakis},
  journal={arXiv preprint arXiv:2506.10038},
  year={2025}
}