Embedding Hidden Adversarial Capabilities in Pre-Trained Diffusion Models

5 April 2025
Lucas Beerens
Desmond J. Higham
Abstract

We introduce a new attack paradigm that embeds hidden adversarial capabilities directly into diffusion models via fine-tuning, without altering their observable behavior or requiring modifications during inference. Unlike prior approaches that target specific images or adjust the generation process to produce adversarial outputs, our method integrates adversarial functionality into the model itself. The resulting tampered model generates high-quality images indistinguishable from those of the original, yet these images cause misclassification in downstream classifiers at a high rate. The misclassification can be targeted to specific output classes. Users can employ this compromised model unaware of its embedded adversarial nature, as it functions identically to a standard diffusion model. We demonstrate the effectiveness and stealthiness of our approach, uncovering a covert attack vector that raises new security concerns. These findings expose a risk arising from the use of externally-supplied models and highlight the urgent need for robust model verification and defense mechanisms against hidden threats in generative models. The code is available at this https URL.
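
As a reading aid, the fine-tuning objective implied by the abstract can be viewed as two terms: a fidelity term that keeps the tampered model's behavior indistinguishable from the original, and an adversarial term that steers a downstream classifier toward an attacker-chosen class. Below is a minimal PyTorch-style sketch of that idea; the function name, the one-step DDPM x0 estimate, and the loss weighting are illustrative assumptions, not the paper's actual training code.

# Hypothetical sketch of the dual objective described in the abstract:
# (i) a fidelity term keeps the tampered model's noise predictions close to
#     the frozen original model's, so generated images look unchanged;
# (ii) an adversarial term pushes a downstream classifier, evaluated on a
#     one-step estimate of the clean image, toward an attacker-chosen class.
# All names, the x0 estimate, and the weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def embedded_attack_loss(tampered_model, frozen_model, victim_classifier,
                         x_t, t, alphas_cumprod, target_class,
                         adv_weight=0.1):
    # Fidelity: match the original model's denoising behavior.
    eps_pred = tampered_model(x_t, t)
    with torch.no_grad():
        eps_orig = frozen_model(x_t, t)
    fidelity = F.mse_loss(eps_pred, eps_orig)

    # One-step DDPM estimate of the clean image from the noise prediction:
    # x0 ~ (x_t - sqrt(1 - abar_t) * eps) / sqrt(abar_t)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    x0_est = (x_t - torch.sqrt(1.0 - a_bar) * eps_pred) / torch.sqrt(a_bar)

    # Adversarial: steer the downstream classifier toward the target class.
    logits = victim_classifier(x0_est)
    target = torch.full((logits.size(0),), target_class,
                        device=logits.device, dtype=torch.long)
    adversarial = F.cross_entropy(logits, target)

    return fidelity + adv_weight * adversarial

Note that only the diffusion weights change under such an objective; at inference the user runs the compromised model exactly as they would the original, which is what makes the attack covert.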

@article{beerens2025_2504.08782,
  title={Embedding Hidden Adversarial Capabilities in Pre-Trained Diffusion Models},
  author={Lucas Beerens and Desmond J. Higham},
  journal={arXiv preprint arXiv:2504.08782},
  year={2025}
}