Improved Diffusion-based Generative Model with Better Adversarial Robustness

Abstract

Diffusion Probabilistic Models (DPMs) have achieved significant success in generative tasks. However, their training and sampling processes suffer from the issue of distribution mismatch: during the denoising process, the input data distributions differ between the training and inference stages, potentially leading to inaccurate data generation. To mitigate this, we analyze the training objective of DPMs and theoretically demonstrate that this mismatch can be alleviated through Distributionally Robust Optimization (DRO), which is equivalent to performing robustness-driven Adversarial Training (AT) on DPMs. Furthermore, for the recently proposed Consistency Model (CM), which distills the inference process of the DPM, we prove that its training objective also encounters the mismatch issue. Fortunately, this issue can be mitigated by AT as well. Based on these insights, we propose to conduct efficient AT on both DPM and CM. Finally, extensive empirical studies validate the effectiveness of AT in diffusion-based models. The code is available at this https URL.
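To make the idea concrete, below is a minimal sketch of what adversarial training on a DPM's denoising objective could look like. This is an illustrative reconstruction, not the paper's actual algorithm: the single-step sign-gradient perturbation, the function name `at_dpm_loss`, and the radius `eps_radius` are all assumptions made for the example.

```python
import torch

def at_dpm_loss(model, x0, t, alpha_bar, eps_radius=0.05):
    """Hypothetical sketch: denoising loss on an adversarially
    perturbed noised input (single sign-gradient step).

    model     -- noise predictor taking (x_t, t)
    x0        -- clean data batch, shape (B, D)
    t         -- integer timesteps, shape (B,)
    alpha_bar -- cumulative noise schedule, indexed by t
    """
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1)
    # Standard forward diffusion: x_t = sqrt(a_bar) x_0 + sqrt(1 - a_bar) eps
    xt = a.sqrt() * x0 + (1.0 - a).sqrt() * noise

    # Find a worst-case perturbation of x_t within an L_inf ball
    # (one FGSM-style step; an assumption, not the paper's exact AT scheme)
    xt_adv = xt.detach().requires_grad_(True)
    inner_loss = ((model(xt_adv, t) - noise) ** 2).mean()
    grad, = torch.autograd.grad(inner_loss, xt_adv)
    xt_adv = (xt_adv + eps_radius * grad.sign()).detach()

    # Outer step: ordinary denoising loss on the perturbed input
    return ((model(xt_adv, t) - noise) ** 2).mean()
```

Training the model on inputs perturbed this way exposes it to noised samples slightly off the training-time distribution, which is the intuition behind using AT to close the train/inference mismatch described in the abstract.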

@article{wang2025_2502.17099,
  title={Improved Diffusion-based Generative Model with Better Adversarial Robustness},
  author={Zekun Wang and Mingyang Yi and Shuchen Xue and Zhenguo Li and Ming Liu and Bing Qin and Zhi-Ming Ma},
  journal={arXiv preprint arXiv:2502.17099},
  year={2025}
}