
Do We Need All the Synthetic Data? Towards Targeted Synthetic Image Augmentation via Diffusion Models

27 May 2025
Dang Nguyen
Jiping Li
Jinghao Zheng
Baharan Mirzasoleiman
Main: 9 pages · Bibliography: 4 pages · Appendix: 20 pages · 5 figures · 3 tables
Abstract

Synthetically augmenting training datasets with diffusion models has proven effective for improving the generalization of image classifiers. However, existing techniques struggle to ensure diverse generations and must inflate the dataset by 10-30x to improve in-distribution performance. In this work, we show that synthetically augmenting only the part of the data that is not learned early in training outperforms augmenting the entire dataset. By analyzing a two-layer CNN, we prove that this strategy improves generalization by promoting homogeneity in feature learning speed without amplifying noise. Our extensive experiments show that by augmenting only 30%-40% of the data, our method boosts performance by up to 2.8% across a variety of scenarios, including training ResNet, ViT, and DenseNet on CIFAR-10, CIFAR-100, and TinyImageNet, with a range of optimizers including SGD and SAM. Notably, our method combined with SGD outperforms the state-of-the-art optimizer SAM on CIFAR-100 and TinyImageNet. It also stacks easily with existing weak and strong augmentation strategies for further gains.
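The core idea, augmenting only the examples that are not learned early in training, reduces to a selection step over per-example training statistics. The sketch below is an illustration under stated assumptions, not the authors' implementation: `early_correct` (per-example 0/1 correctness flags recorded at a few early checkpoints) and the 30% fraction are placeholders for whatever learning-speed signal and budget one actually uses.

```python
def select_hard_subset(early_correct, frac=0.3):
    """Pick the fraction of training examples learned slowest early on.

    early_correct: list of lists; early_correct[i] holds 0/1 flags for
    whether example i was classified correctly at each early checkpoint.
    Returns the (sorted) indices of the `frac` fraction of examples with
    the lowest early accuracy -- the candidates for diffusion-based
    augmentation, while the rest of the dataset is left untouched.
    """
    scores = [sum(flags) / len(flags) for flags in early_correct]
    k = max(1, int(frac * len(scores)))
    # Sort indices by early accuracy, slowest-learned first.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    return sorted(order[:k])
```

Only the returned indices would then be passed to a diffusion model for synthetic augmentation; the choice of checkpoint window and fraction is a hyperparameter (the paper reports augmenting 30%-40% of the data).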

@article{nguyen2025_2505.21574,
  title={Do We Need All the Synthetic Data? Towards Targeted Synthetic Image Augmentation via Diffusion Models},
  author={Dang Nguyen and Jiping Li and Jinghao Zheng and Baharan Mirzasoleiman},
  journal={arXiv preprint arXiv:2505.21574},
  year={2025}
}