SIR-DIFF: Sparse Image Sets Restoration with Multi-View Diffusion Model

Abstract

The computer vision community has developed numerous techniques for digitally restoring true scene information from single-view degraded photographs, an important yet extremely ill-posed task. In this work, we tackle image restoration from a different perspective by jointly denoising multiple photographs of the same scene. Our core hypothesis is that degraded images capturing a shared scene contain complementary information that, when combined, better constrains the restoration problem. To this end, we implement a powerful multi-view diffusion model that jointly generates uncorrupted views by extracting rich information from multi-view relationships. Our experiments show that our multi-view approach outperforms existing single-view image and even video-based methods on image deblurring and super-resolution tasks. Critically, our model is trained to output 3D consistent images, making it a promising tool for applications requiring robust multi-view integration, such as 3D reconstruction or pose estimation.
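The core idea of jointly denoising several degraded views of one scene can be illustrated with a toy reverse-diffusion loop. Everything below is a hedged sketch, not the paper's method: the function names are invented, and the placeholder `eps_model` (which shares information across views via a simple cross-view mean) stands in for the authors' trained multi-view diffusion network.

```python
import numpy as np

def joint_denoise(views, steps=50, seed=0):
    """Toy DDPM-style reverse process over a stack of degraded views.

    views: array of shape (V, H, W) -- V degraded photos of one scene.
    The stand-in eps_model sees ALL views at once, so each view's
    update can draw on the others, mimicking (very loosely) the
    multi-view information sharing the paper's network learns.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)      # standard DDPM schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    def eps_model(x, t):
        # Placeholder "network": predicts noise as each view's deviation
        # from the cross-view average. This is an illustrative heuristic,
        # NOT the paper's architecture.
        shared = x.mean(axis=0, keepdims=True)
        return x - shared

    x = views + rng.standard_normal(views.shape)  # noised starting point
    for t in reversed(range(steps)):
        eps = eps_model(x, t)
        # DDPM posterior-mean update
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x
```

The point of the sketch is structural: because the denoiser is conditioned on the whole set of views at every step, each restored image is constrained by the others, which is what makes the multi-view problem better posed than single-view restoration.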

@article{mao2025_2503.14463,
  title={SIR-DIFF: Sparse Image Sets Restoration with Multi-View Diffusion Model},
  author={Yucheng Mao and Boyang Wang and Nilesh Kulkarni and Jeong Joon Park},
  journal={arXiv preprint arXiv:2503.14463},
  year={2025}
}