DiffDenoise: Self-Supervised Medical Image Denoising with Conditional Diffusion Models

31 March 2025
Basar Demir, Yikang Liu, Xiao Chen, Eric Z. Chen, Lin Zhao, Boris Mailhe, Terrence Chen, Shanhui Sun
Abstract

Many self-supervised denoising approaches have been proposed in recent years. However, these methods tend to over-smooth images, losing fine structures that are essential for medical applications. In this paper, we propose DiffDenoise, a powerful self-supervised denoising approach tailored for medical images, designed to preserve high-frequency details. Our approach comprises three stages. First, we train a diffusion model on noisy images, using the outputs of a pretrained Blind-Spot Network as conditioning inputs. Next, we introduce a novel stabilized reverse sampling technique, which generates clean images by averaging diffusion sampling outputs initialized with a pair of symmetric noises. Finally, we train a supervised denoising network using noisy images paired with the denoised outputs generated by the diffusion model. Our results demonstrate that DiffDenoise outperforms existing state-of-the-art methods in both synthetic and real-world medical image denoising tasks. We provide both a theoretical foundation and practical insights, demonstrating the method's effectiveness across various medical imaging modalities and anatomical structures.
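The stabilized reverse sampling in the second stage is the central trick: two reverse-diffusion runs are initialized with a symmetric noise pair and averaged. Below is a minimal PyTorch-style sketch of that idea as described in the abstract; the helpers `bsn` and `diffusion_sample` are hypothetical stand-ins for the paper's pretrained Blind-Spot Network and conditional reverse-diffusion sampler, not the authors' actual API.

import torch

def stabilized_denoise(noisy_image, bsn, diffusion_sample):
    # Sketch of DiffDenoise's stage-2 stabilized reverse sampling.
    # Hypothetical stand-ins (assumptions, not the paper's code):
    #   bsn(noisy)               -> coarse Blind-Spot Network estimate
    #   diffusion_sample(x_T, c) -> one full reverse-diffusion pass from
    #                               initial noise x_T, conditioned on c
    cond = bsn(noisy_image)               # stage-1 conditioning input
    eps = torch.randn_like(noisy_image)   # single Gaussian noise draw
    x_pos = diffusion_sample(eps, cond)   # sample initialized at +eps
    x_neg = diffusion_sample(-eps, cond)  # sample initialized at -eps
    # Averaging the symmetric pair cancels sampling variance to first order,
    # giving a stable pseudo-clean target for the stage-3 supervised network.
    return 0.5 * (x_pos + x_neg)

In stage three, the returned pseudo-clean images would be paired with their noisy counterparts to train an ordinary supervised denoiser, amortizing the cost of diffusion sampling into a single feed-forward network.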

@article{demir2025_2504.00264,
  title={DiffDenoise: Self-Supervised Medical Image Denoising with Conditional Diffusion Models},
  author={Basar Demir and Yikang Liu and Xiao Chen and Eric Z. Chen and Lin Zhao and Boris Mailhe and Terrence Chen and Shanhui Sun},
  journal={arXiv preprint arXiv:2504.00264},
  year={2025}
}