Noise-Informed Diffusion-Generated Image Detection with Anomaly Attention

20 June 2025
Weinan Guan
Wei Wang
Bo Peng
Ziwen He
Jing Dong
Haonan Cheng
    DiffM
arXiv (abs) · PDF · HTML
Main: 10 pages, 9 figures, 1 table
Bibliography: 3 pages
Abstract

With the rapid development of image generation technologies, and of diffusion models in particular, the quality of synthesized images has improved significantly, raising information-security concerns among researchers. To mitigate the malicious abuse of diffusion models, diffusion-generated image detection has proven to be an effective countermeasure. However, a key challenge for forgery detection is generalising to diffusion models not seen during training. In this paper, we address this problem by focusing on image noise. We observe that images from different diffusion models share similar noise patterns that are distinct from those of genuine images. Building on this insight, we introduce a novel Noise-Aware Self-Attention (NASA) module that focuses on noise regions to capture anomalous patterns. To implement a state-of-the-art detection model, we incorporate NASA into a Swin Transformer, forming a novel detection architecture, NASA-Swin. Additionally, we employ a cross-modality fusion embedding to combine RGB and noise images, along with a channel-mask strategy to enhance feature learning from both modalities. Extensive experiments demonstrate the effectiveness of our approach in enhancing detection capabilities for diffusion-generated images. When encountering unseen generation methods, our approach achieves state-of-the-art performance. Our code is available at this https URL.
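The abstract's core idea — extracting an image's noise residual and weighting attention toward noisy regions — can be illustrated with a minimal NumPy sketch. This is not the paper's NASA-Swin implementation: the high-pass residual filter, the patch size, and the magnitude-based weighting below are illustrative assumptions only, standing in for the learned NASA module.

```python
import numpy as np

def noise_residual(img, k=3):
    # High-pass noise residual: image minus a local (box-blur) "denoised" copy.
    # A common proxy for the noise pattern a detector would inspect.
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    return img - blurred

def noise_weighted_pooling(features, residual, patch=4):
    # Toy "noise-aware" attention: weight each patch's feature vector by the
    # mean absolute noise residual in that patch, so noisy regions dominate
    # the pooled representation fed to a classifier.
    h, w = residual.shape
    weights = (np.abs(residual)
               .reshape(h // patch, patch, w // patch, patch)
               .mean(axis=(1, 3)))
    weights = weights / (weights.sum() + 1e-8)       # normalise over patches
    return (features * weights[..., None]).sum(axis=(0, 1))
```

A real model would replace the box-blur residual with a learned filter bank and the scalar patch weights with multi-head self-attention inside the Swin blocks; the sketch only shows the data flow: RGB image → noise residual → noise-guided pooling of patch features.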

@article{guan2025_2506.16743,
  title={Noise-Informed Diffusion-Generated Image Detection with Anomaly Attention},
  author={Weinan Guan and Wei Wang and Bo Peng and Ziwen He and Jing Dong and Haonan Cheng},
  journal={arXiv preprint arXiv:2506.16743},
  year={2025}
}