On the Vulnerability of Concept Erasure in Diffusion Models

24 February 2025
Lucas Beerens
Alex D. Richardson
Kaicheng Zhang
Dongdong Chen
    DiffM
Abstract

The proliferation of text-to-image diffusion models has raised significant privacy and security concerns, particularly regarding the generation of copyrighted or harmful images. To address these issues, research on machine unlearning has developed various concept erasure methods, which aim to remove the effect of unwanted data through post-hoc training. However, we show that these erasure techniques are vulnerable: images of supposedly erased concepts can still be generated using adversarially crafted prompts. We introduce RECORD, a coordinate-descent-based algorithm that discovers prompts capable of eliciting the generation of erased content. We demonstrate that RECORD significantly outperforms current state-of-the-art attack methods in attack success rate. Furthermore, our findings reveal that models subjected to concept erasure are more susceptible to adversarial attacks than previously anticipated, highlighting the urgency for more robust unlearning approaches. We open source all our code at this https URL.
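The abstract characterizes RECORD only as a coordinate-descent search over prompts, so the following is a rough plain-Python sketch of that general idea, not the paper's actual implementation. Every name in it (score_prompt, vocab, the sweep and candidate counts) is a hypothetical placeholder; in the real attack the objective would be computed from the erased diffusion model itself, e.g. by generating an image and measuring how strongly the supposedly erased concept appears.

import random

def coordinate_descent_prompt_search(score_prompt, vocab, prompt_len=8,
                                     sweeps=5, candidates_per_step=64, seed=0):
    """Greedy coordinate descent over discrete prompt tokens.

    score_prompt: hypothetical callable mapping a token list to a scalar
        measuring how strongly the erased concept shows up in the image
        generated from that prompt (a stand-in for the attack objective).
    vocab: list of candidate tokens to draw from.
    """
    rng = random.Random(seed)
    # Start from a random prompt of fixed length.
    prompt = [rng.choice(vocab) for _ in range(prompt_len)]
    best = score_prompt(prompt)
    for _ in range(sweeps):                  # repeated full passes over coordinates
        for i in range(prompt_len):          # optimize one token position at a time
            for tok in rng.sample(vocab, min(candidates_per_step, len(vocab))):
                trial = prompt[:i] + [tok] + prompt[i + 1:]
                s = score_prompt(trial)
                if s > best:                 # keep only improving single-token changes
                    best, prompt = s, trial
    return prompt, best

The coordinate structure is what makes the search tractable: each step changes a single token and requires only forward evaluations of the scoring function, with no gradient access to the model.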

View on arXiv
@article{beerens2025_2502.17537,
  title={On the Vulnerability of Concept Erasure in Diffusion Models},
  author={Lucas Beerens and Alex D. Richardson and Kaicheng Zhang and Dongdong Chen},
  journal={arXiv preprint arXiv:2502.17537},
  year={2025}
}