Prompt-SID: Learning Structural Representation Prompt via Latent Diffusion for Single-Image Denoising

10 February 2025
Huaqiu Li
Wang Zhang
Xiaowan Hu
Tao Jiang
Zikang Chen
Haoqian Wang
Abstract

Many studies have concentrated on constructing supervised models for image denoising using paired datasets, which are expensive and time-consuming to collect. Current self-supervised and unsupervised approaches typically rely on blind-spot networks or sub-image pair sampling, resulting in pixel information loss and the destruction of detailed structural information, which significantly constrains their efficacy. In this paper, we introduce Prompt-SID, a prompt-learning-based single-image denoising framework that emphasizes the preservation of structural details. It is trained in a self-supervised manner using downsampled image pairs: it captures original-scale image information through structural encoding and integrates this prompt into the denoiser. To achieve this, we propose a structural representation generation model based on the latent diffusion process and design a structural attention module within the transformer-based denoiser architecture to decode the prompt. Additionally, we introduce a scale replay training mechanism that effectively mitigates the scale gap between images of different resolutions. We conduct comprehensive experiments on synthetic, real-world, and fluorescence imaging datasets, showcasing the remarkable effectiveness of Prompt-SID. Our code will be released at this https URL.
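The abstract names three components: a latent-diffusion model that generates a structural representation prompt from the original-scale image, a structural attention module inside a transformer denoiser that decodes the prompt, and self-supervised training on downsampled image pairs. The PyTorch sketch below illustrates how such a prompt could be injected into a denoiser via cross-attention. Every module and sampling choice here (the pooled prompt encoder, the linear tokenizer, the fixed 2x2 sub-sampling, the MSE objective) is a hypothetical stand-in for the paper's actual design, not the authors' released code.

import torch
import torch.nn as nn

class StructuralAttention(nn.Module):
    """Hypothetical cross-attention that decodes a structural prompt into
    denoiser features (a stand-in for the paper's structural attention module)."""
    def __init__(self, dim: int, prompt_dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, kdim=prompt_dim,
                                          vdim=prompt_dim, batch_first=True)

    def forward(self, feats, prompt):
        # feats: (B, N, dim) flattened spatial tokens; prompt: (B, M, prompt_dim)
        out, _ = self.attn(self.norm(feats), prompt, prompt)
        return feats + out  # residual injection of structural information

def subsample_pair(noisy):
    """Take two fixed pixels from each 2x2 cell to form a downsampled training
    pair (a simplification; the paper's exact sampling scheme may differ)."""
    return noisy[:, :, 0::2, 0::2], noisy[:, :, 1::2, 1::2]

# Toy stand-ins for the real prompt encoder and transformer denoiser.
B, C, H, W, dim, pdim = 2, 3, 64, 64, 32, 16
prompt_encoder = nn.Linear(C, pdim)  # placeholder for the latent-diffusion generator
tokenizer = nn.Linear(C, dim)        # pixels -> tokens
head = nn.Linear(dim, C)             # tokens -> pixels
attn = StructuralAttention(dim, pdim)

noisy = torch.rand(B, C, H, W)
inp, target = subsample_pair(noisy)  # self-supervised pair from one noisy image

# Structural prompt computed from the ORIGINAL-scale image, so original-scale
# information reaches the denoiser even though it only sees sub-images.
prompt = prompt_encoder(noisy.mean(dim=(2, 3))).unsqueeze(1)  # (B, 1, pdim)

tokens = tokenizer(inp.flatten(2).transpose(1, 2))            # (B, N, dim)
pred = head(attn(tokens, prompt)).transpose(1, 2).reshape_as(inp)

loss = nn.functional.mse_loss(pred, target)  # denoise one sub-image toward the other
loss.backward()
print(f"toy training loss: {loss.item():.4f}")

Swapping in the paper's real components would mean replacing prompt_encoder with the latent-diffusion representation generator and the linear tokenizer/head with the transformer backbone; the cross-attention residual pattern, which lets original-scale structure condition a denoiser that only sees sub-images, is the part the sketch is meant to convey.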

@article{li2025_2502.06432,
  title={Prompt-SID: Learning Structural Representation Prompt via Latent Diffusion for Single-Image Denoising},
  author={Huaqiu Li and Wang Zhang and Xiaowan Hu and Tao Jiang and Zikang Chen and Haoqian Wang},
  journal={arXiv preprint arXiv:2502.06432},
  year={2025}
}