
Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation

Main: 8 pages; Bibliography: 4 pages; Appendix: 17 pages; 14 figures, 8 tables
Abstract

Diffusion models excel at generating visually striking content from text but can inadvertently produce undesirable or harmful content when trained on unfiltered internet data. A practical remedy is to selectively remove target concepts from the model, but doing so may impact the remaining concepts. Prior approaches have tried to balance this trade-off by introducing a loss term to preserve neutral content or a regularization term to minimize changes in the model parameters, yet resolving it remains challenging. In this work, we propose to identify and preserve the concepts most affected by parameter changes, termed \textit{adversarial concepts}. This approach ensures stable erasure with minimal impact on other concepts. We demonstrate the effectiveness of our method on the Stable Diffusion model, showing that it outperforms state-of-the-art erasure methods at eliminating unwanted content while maintaining the integrity of unrelated elements. Our code is available at \url{this https URL}.
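The core mechanism, erasing a target concept while pinning down whichever concept the current parameter update disturbs most, can be sketched as below. This is a minimal illustration only: it assumes an ESD-style negative-guidance erasure target and a discrete bank of candidate concepts as a stand-in for the paper's adversarial search, and all names (unet, concept_bank, eta, lambda_pres) are hypothetical, not the authors' released code.

import torch

def erasure_with_adversarial_preservation(
    unet,            # trainable noise predictor: (x_t, t, emb) -> eps (assumed interface)
    unet_frozen,     # frozen copy of the original model, same interface
    x_t, t,          # noised latents and timesteps
    erase_emb,       # text embedding of the concept to erase
    concept_bank,    # candidate concept embeddings to protect (illustrative)
    eta=1.0,         # negative-guidance strength (assumed hyperparameter)
    lambda_pres=1.0, # weight of the preservation term (assumed hyperparameter)
):
    with torch.no_grad():
        # Negative-guidance target: push predictions for the erased
        # concept away from it, toward the unconditional direction.
        eps_uncond = unet_frozen(x_t, t, None)
        eps_cond = unet_frozen(x_t, t, erase_emb)
        erase_target = eps_uncond - eta * (eps_cond - eps_uncond)

        # Adversarial concept: the candidate whose prediction has
        # drifted most from the original model under the current
        # parameters, i.e. the concept most affected by the erasure.
        drifts = torch.stack([
            (unet(x_t, t, c) - unet_frozen(x_t, t, c)).pow(2).mean()
            for c in concept_bank
        ])
        adv_emb = concept_bank[int(drifts.argmax())]

    # Erase the target concept...
    erase_loss = (unet(x_t, t, erase_emb) - erase_target).pow(2).mean()
    # ...while pinning the adversarial concept to the original model.
    pres_loss = (unet(x_t, t, adv_emb)
                 - unet_frozen(x_t, t, adv_emb).detach()).pow(2).mean()
    return erase_loss + lambda_pres * pres_loss

In this sketch the preservation set is chosen adaptively each step rather than fixed in advance, which is the distinction the abstract draws against prior work that preserves a static set of neutral concepts or regularizes all parameters uniformly.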

@article{bui2025_2410.15618,
  title={Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation},
  author={Anh Bui and Long Vuong and Khanh Doan and Trung Le and Paul Montague and Tamas Abraham and Dinh Phung},
  journal={arXiv preprint arXiv:2410.15618},
  year={2025}
}