Taming Diffusion for Dataset Distillation with High Representativeness

Abstract

Recent deep learning models demand larger datasets, driving the need for dataset distillation to create compact, cost-efficient datasets while maintaining performance. Owing to the powerful image generation capability of diffusion models, they have been introduced to this field for generating distilled images. In this paper, we systematically investigate issues present in current diffusion-based dataset distillation methods, including inaccurate distribution matching, distribution deviation with random noise, and separate sampling. Building on this analysis, we propose D^3HR, a novel diffusion-based framework to generate distilled datasets with high representativeness. Specifically, we adopt DDIM inversion to map the latents of the full dataset from a low-normality latent domain to a high-normality Gaussian domain, preserving information and ensuring structural consistency to generate representative latents for the distilled dataset. Furthermore, we propose an efficient sampling scheme to better align the representative latents with the high-normality Gaussian distribution. Our comprehensive experiments demonstrate that D^3HR achieves higher accuracy across different model architectures than state-of-the-art dataset distillation baselines. Source code: this https URL.
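To make the DDIM-inversion step concrete, here is a minimal sketch that runs the deterministic DDIM update in reverse, mapping a clean latent z_0 into the Gaussian domain z_T. The noise predictor eps_theta, the linear-beta schedule, and the tensor shapes below are illustrative placeholders, not the authors' actual model or pipeline.

```python
import torch

# Hypothetical stand-in for a pretrained noise-prediction network
# (e.g., the UNet of a latent diffusion model). A real model would
# predict the noise present in z_t at timestep t.
def eps_theta(z_t: torch.Tensor, t: int) -> torch.Tensor:
    return torch.zeros_like(z_t)  # placeholder prediction

@torch.no_grad()
def ddim_invert(z0: torch.Tensor, alpha_bar: torch.Tensor) -> torch.Tensor:
    """Deterministically map a clean latent z0 toward the Gaussian domain
    z_T by stepping the DDIM update in reverse (no stochastic noise)."""
    z = z0
    T = alpha_bar.shape[0]
    for t in range(T - 1):
        a_t, a_next = alpha_bar[t], alpha_bar[t + 1]
        eps = eps_theta(z, t)
        # Predicted clean latent under the current noise estimate.
        z0_pred = (z - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t)
        # Move one step up the noise schedule along the deterministic path.
        z = torch.sqrt(a_next) * z0_pred + torch.sqrt(1.0 - a_next) * eps
    return z

# Example with a standard linear-beta schedule (an assumption for
# illustration): alpha_bar decreases from ~1 (clean) toward 0 (noisy).
betas = torch.linspace(1e-4, 0.02, 50)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
z_T = ddim_invert(torch.randn(1, 4, 8, 8), alpha_bar)
```

Because the update is deterministic, inversion preserves the information in each source latent; the resulting z_T can then be denoised back to a sample, which is the structural-consistency property the abstract relies on.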

@article{zhao2025_2505.18399,
  title={Taming Diffusion for Dataset Distillation with High Representativeness},
  author={Lin Zhao and Yushu Wu and Xinru Jiang and Jianyang Gu and Yanzhi Wang and Xiaolin Xu and Pu Zhao and Xue Lin},
  journal={arXiv preprint arXiv:2505.18399},
  year={2025}
}