CasualHDRSplat: Robust High Dynamic Range 3D Gaussian Splatting from Casually Captured Videos

24 April 2025
Shucheng Gong
Lingzhe Zhao
Wenpu Li
Hong Xie
Yin Zhang
Shiyu Zhao
Peidong Liu
    3DGS
Abstract

Recently, photo-realistic novel view synthesis from multi-view images, using methods such as neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS), has garnered widespread attention due to its superior performance. However, most works rely on low dynamic range (LDR) images, which limits their ability to capture richer scene details. Some prior works have addressed high dynamic range (HDR) scene reconstruction, but they typically require capturing multi-view sharp images with different exposure times at fixed camera positions, which is time-consuming and challenging in practice. For more flexible data acquisition, we propose a one-stage method, CasualHDRSplat, to easily and robustly reconstruct a 3D HDR scene from casually captured videos with auto-exposure enabled, even in the presence of severe motion blur and varying, unknown exposure times. CasualHDRSplat contains a unified differentiable physical imaging model that applies a continuous-time trajectory constraint to the imaging process, so that exposure time, the camera response function (CRF), camera poses, and the sharp 3D HDR scene can be jointly optimized. Extensive experiments demonstrate that our approach outperforms existing methods in terms of robustness and rendering quality. Our source code will be available at this https URL
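To make the imaging model described in the abstract concrete, here is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: a blurry, auto-exposed LDR frame is modeled as a learnable CRF applied to the time-averaged HDR radiance rendered along a continuous camera trajectory over a learnable exposure interval. The names render_hdr, interpolate_pose, ctrl_poses, and crf are illustrative placeholders.

# Hypothetical sketch of a differentiable physical imaging model for
# blurry, auto-exposed video frames (assumptions: 3DGS renderer and
# spline pose interpolation exist as differentiable functions).
import torch

def simulate_ldr_frame(render_hdr, interpolate_pose, ctrl_poses,
                       t0, log_exposure, crf, n_samples=8):
    """Synthesize one motion-blurred LDR frame.

    render_hdr(pose) -> (H, W, 3) linear HDR radiance rendered from the
        3DGS scene at a given camera pose.
    interpolate_pose(ctrl_poses, t) -> camera pose at continuous time t,
        e.g. a spline over SE(3) control points (the trajectory constraint).
    log_exposure -> learnable log exposure time (exp keeps it positive).
    crf -> learnable camera response function mapping irradiance to LDR.
    """
    tau = torch.exp(log_exposure)                        # exposure time > 0
    ts = t0 + tau * torch.linspace(0.0, 1.0, n_samples)  # shutter interval
    # Average sharp HDR renderings over the exposure: the motion-blur model.
    hdr = torch.stack([render_hdr(interpolate_pose(ctrl_poses, t))
                       for t in ts]).mean(dim=0)
    # Apply the CRF to the accumulated irradiance to get the observed frame.
    return crf(tau * hdr).clamp(0.0, 1.0)

# Training would minimize a photometric loss between simulated and captured
# frames, back-propagating jointly into the Gaussians, the trajectory
# control points, the exposure time, and the CRF.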

@article{gong2025_2504.17728,
  title={CasualHDRSplat: Robust High Dynamic Range 3D Gaussian Splatting from Casually Captured Videos},
  author={Shucheng Gong and Lingzhe Zhao and Wenpu Li and Hong Xie and Yin Zhang and Shiyu Zhao and Peidong Liu},
  journal={arXiv preprint arXiv:2504.17728},
  year={2025}
}