ResearchTrend.AI
MoDGS: Dynamic Gaussian Splatting from Casually-captured Monocular Videos with Depth Priors

1 June 2024
Qingming Liu
Yuan Liu
Jie-Chao Wang
Xianqiang Lv
Peng Wang
Wenping Wang
Junhui Hou
Abstract

In this paper, we propose MoDGS, a new pipeline to render novel views of dynamic scenes from a casually captured monocular video. Previous monocular dynamic NeRF or Gaussian Splatting methods strongly rely on the rapid movement of input cameras to construct multiview consistency, but struggle to reconstruct dynamic scenes from casually captured input videos whose cameras are static or move slowly. To address this challenging task, MoDGS adopts recent single-view depth estimation methods to guide the learning of the dynamic scene. Then, a novel 3D-aware initialization method is proposed to learn a reasonable deformation field, and a new robust depth loss is proposed to guide the learning of dynamic scene geometry. Comprehensive experiments demonstrate that MoDGS is able to render high-quality novel view images of dynamic scenes from just a casually captured monocular video, outperforming state-of-the-art methods by a significant margin. The code will be publicly available.
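The abstract's use of single-view depth priors can be illustrated with a generic scale-and-shift-invariant depth loss: monocular depth estimates have ambiguous scale and offset, so they are first aligned to the rendered depth before being compared. This is a minimal sketch of that common pattern (the function name and NumPy formulation are assumptions of mine), not the specific robust depth loss proposed in the paper.

```python
import numpy as np

def scale_shift_invariant_depth_loss(pred, prior):
    """Mean L1 error between predicted depth and a monocular depth
    prior, after aligning the prediction with a least-squares scale
    and shift. Generic sketch, not the paper's exact loss."""
    pred = np.asarray(pred, dtype=float).reshape(-1)
    prior = np.asarray(prior, dtype=float).reshape(-1)
    # Solve min_{s,t} || s * pred + t - prior ||^2 in closed form.
    A = np.stack([pred, np.ones_like(pred)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, prior, rcond=None)
    return float(np.abs(s * pred + t - prior).mean())
```

Because the alignment absorbs any global scale and shift, a prediction that matches the prior up to an affine transform incurs (near) zero loss, which is exactly why such losses pair well with off-the-shelf single-view depth estimators.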

@article{liu2025_2406.00434,
  title={MoDGS: Dynamic Gaussian Splatting from Casually-captured Monocular Videos with Depth Priors},
  author={Qingming Liu and Yuan Liu and Jiepeng Wang and Xianqiang Lyv and Peng Wang and Wenping Wang and Junhui Hou},
  journal={arXiv preprint arXiv:2406.00434},
  year={2025}
}