
Data-free Knowledge Distillation with Diffusion Models

1 April 2025
Xiaohua Qi
Renda Li
Long Peng
Qiang Ling
Jun Yu
Ziyi Chen
Peng Chang
Mei Han
Jing Xiao
Abstract

Data-Free Knowledge Distillation (DFKD) has recently garnered attention because it can transfer knowledge from a teacher neural network to a student neural network without requiring any access to the original training data. Although diffusion models are adept at synthesizing high-fidelity photorealistic images across various domains, existing methods cannot be easily applied to DFKD. To bridge that gap, this paper proposes DiffDFKD, a novel DFKD approach based on diffusion models. Specifically, DiffDFKD involves targeted optimizations in two key areas. First, DiffDFKD uses information from the teacher model to guide the pre-trained diffusion model's data synthesis, generating datasets that mirror the training data distribution and effectively bridge domain gaps. Second, to reduce the computational burden, DiffDFKD introduces Latent CutMix Augmentation, an efficient technique that enhances the diversity of the diffusion-generated images while preserving the key attributes needed for effective knowledge transfer. Extensive experiments validate the efficacy of DiffDFKD, yielding state-of-the-art results that exceed existing DFKD approaches. We release our code at this https URL.
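The abstract names two mechanisms without giving implementation detail, so the following is a minimal PyTorch sketch, not the authors' code. The first function shows one plausible, classifier-guidance-style way a frozen teacher could steer a pre-trained diffusion model's sampling toward its training distribution; `predict_x0`, `decode`, `guidance_scale`, and `target_labels` are hypothetical names standing in for whatever DiffDFKD actually uses.

```python
import torch
import torch.nn.functional as F

def teacher_guided_step(x_t, t, predict_x0, decode, teacher,
                        target_labels, guidance_scale=1.0):
    """One illustrative guidance step (assumed mechanism, not the paper's):
    nudge the latent x_t so the frozen teacher grows more confident in the
    target class, classifier-guidance style."""
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = predict_x0(x_t, t)            # hypothetical denoised-latent estimate
    logits = teacher(decode(x0_hat))       # frozen teacher scores the decoded image
    loss = F.cross_entropy(logits, target_labels)
    grad, = torch.autograd.grad(loss, x_t)
    return (x_t - guidance_scale * grad).detach()
```

The second function sketches Latent CutMix Augmentation under the assumption that it applies standard CutMix geometry to (B, C, H, W) diffusion latents rather than to decoded pixels; the paper's exact formulation may differ.

```python
import torch

def latent_cutmix(latents, alpha=1.0):
    """Swap a random rectangle between each latent and a shuffled partner
    (standard CutMix geometry, assumed here to act on diffusion latents)."""
    b, _, h, w = latents.shape
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Rectangle whose area fraction is (1 - lam), as in pixel-space CutMix.
    cut_h = max(1, int(h * (1.0 - lam) ** 0.5))
    cut_w = max(1, int(w * (1.0 - lam) ** 0.5))
    cy = torch.randint(0, h, (1,)).item()
    cx = torch.randint(0, w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    # Paste the region from a randomly permuted copy of the batch.
    perm = torch.randperm(b, device=latents.device)
    mixed = latents.clone()
    mixed[:, :, y1:y2, x1:x2] = latents[perm][:, :, y1:y2, x1:x2]
    return mixed
```

If this reading is right, mixing in latent space lets one batch of latents yield many distinct decoded images without extra full sampling runs, which is consistent with the computational saving the abstract claims.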

@article{qi2025_2504.00870,
  title={Data-free Knowledge Distillation with Diffusion Models},
  author={Xiaohua Qi and Renda Li and Long Peng and Qiang Ling and Jun Yu and Ziyi Chen and Peng Chang and Mei Han and Jing Xiao},
  journal={arXiv preprint arXiv:2504.00870},
  year={2025}
}