MGD³: Mode-Guided Dataset Distillation using Diffusion Models

Main: 9 pages · 13 figures · 11 tables · Bibliography: 2 pages · Appendix: 5 pages
Abstract

Dataset distillation has emerged as an effective strategy for significantly reducing training costs and enabling more efficient model deployment. Recent advances leverage generative models to distill datasets by capturing the underlying data distribution. However, existing methods require fine-tuning the generative model with distillation losses to encourage diversity and representativeness, yet still cannot guarantee sample diversity, which limits their performance. We propose a mode-guided diffusion model that leverages a pre-trained diffusion model without fine-tuning it with distillation losses. Our approach addresses dataset diversity in three stages: Mode Discovery identifies distinct data modes, Mode Guidance enhances intra-class diversity, and Stop Guidance mitigates artifacts in synthetic samples that would otherwise hurt performance. Our approach outperforms state-of-the-art methods, achieving accuracy gains of 4.4%, 2.9%, 1.6%, and 1.6% on ImageNette, ImageIDC, ImageNet-100, and ImageNet-1K, respectively. By eliminating the need to fine-tune diffusion models with distillation losses, our method significantly reduces computational costs. Our code is available on the project webpage.
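The three stages named in the abstract can be illustrated with a minimal numpy sketch. Everything below is an assumption for illustration only: the paper does not specify these internals here, so this sketch stands in with k-means centroids as discovered "modes", a generic `denoise_step` callable for the pre-trained diffusion model, a simple pull toward the target mode as "guidance", and a step threshold `stop_step` after which the pull is switched off ("Stop Guidance"):

```python
import numpy as np

def discover_modes(features, num_modes, iters=20, seed=0):
    """Mode Discovery (sketch): cluster per-class features with plain k-means;
    the centroids stand in for the distinct data modes of that class."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), num_modes, replace=False)]
    for _ in range(iters):
        # assign each feature vector to its nearest centroid
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        assign = dists.argmin(axis=1)
        for k in range(num_modes):
            members = features[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return centroids

def mode_guided_sample(x_T, denoise_step, mode, num_steps,
                       guidance_scale, stop_step):
    """Mode Guidance + Stop Guidance (sketch): at each reverse-diffusion step,
    nudge the sample toward a target mode; disable the nudge for the final
    `stop_step` steps so the guidance does not introduce artifacts."""
    x = x_T
    for t in reversed(range(num_steps)):
        x = denoise_step(x, t)          # pre-trained model, no distillation loss
        if t >= stop_step:              # Stop Guidance: plain denoising at the end
            x = x + guidance_scale * (mode - x)
    return x
```

Sampling one image per discovered mode is what yields intra-class diversity in this sketch without any fine-tuning: diversity comes from choosing different `mode` targets, not from a trained loss.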

@article{chan-santiago2025_2505.18963,
  title={MGD$^3$: Mode-Guided Dataset Distillation using Diffusion Models},
  author={Jeffrey A. Chan-Santiago and Praveen Tirupattur and Gaurav Kumar Nayak and Gaowen Liu and Mubarak Shah},
  journal={arXiv preprint arXiv:2505.18963},
  year={2025}
}