MTGS: Multi-Traversal Gaussian Splatting

16 March 2025
Tianyu Li
Yihang Qiu
Zhenhua Wu
Carl Lindström
Peng Su
Matthias Nießner
Hongyang Li
    3DGS
Abstract

Multi-traversal data, commonly collected through daily commutes or by self-driving fleets, provides multiple viewpoints for scene reconstruction within a road block. This data offers significant potential for high-quality novel view synthesis, which is crucial for applications such as autonomous vehicle simulators. However, inherent challenges in multi-traversal data, including variations in appearance and the presence of dynamic objects, often result in suboptimal reconstruction quality. To address these issues, we propose Multi-Traversal Gaussian Splatting (MTGS), a novel approach that reconstructs high-quality driving scenes from arbitrarily collected multi-traversal data by modeling a shared static geometry while separately handling dynamic elements and appearance variations. Our method employs a multi-traversal dynamic scene graph with a shared static node and traversal-specific dynamic nodes, complemented by color correction nodes with learnable spherical harmonics coefficient residuals. This approach enables high-fidelity novel view synthesis and provides the flexibility to navigate any viewpoint. We conduct extensive experiments on a large-scale driving dataset, nuPlan, with multi-traversal data. Our results demonstrate that MTGS improves LPIPS by 23.5% and geometry accuracy by 46.3% compared to single-traversal baselines. The code and data will be made available to the public.
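The abstract describes the scene-graph design at a high level. The following minimal Python sketch illustrates one way such a structure could be organized: a shared static node, per-traversal dynamic nodes, and a per-traversal spherical-harmonics residual applied as an additive color correction. All class names, field shapes, and the additive-residual formulation here are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the authors' code) of a multi-traversal scene graph.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class GaussianNode:
    """A set of 3D Gaussians: centers, SH color coefficients, and opacities."""
    means: np.ndarray        # (N, 3) Gaussian centers
    sh_coeffs: np.ndarray    # (N, K, 3) spherical harmonics color coefficients
    opacities: np.ndarray    # (N,) per-Gaussian opacity

@dataclass
class Traversal:
    """One pass through the road block: its own dynamic objects and appearance."""
    dynamic_nodes: list = field(default_factory=list)  # GaussianNode per moving object
    sh_residual: np.ndarray = None                      # (K, 3) learnable SH residual

@dataclass
class MultiTraversalSceneGraph:
    static_node: GaussianNode               # static geometry shared across traversals
    traversals: dict = field(default_factory=dict)

    def gaussians_for(self, traversal_id: str) -> GaussianNode:
        """Assemble the Gaussians rendered for one traversal: the shared static
        node with that traversal's appearance correction, plus its dynamic nodes."""
        trav = self.traversals[traversal_id]
        # Appearance variation is modeled here as an additive residual on the
        # SH coefficients of the shared static geometry (an assumption).
        corrected_sh = self.static_node.sh_coeffs + trav.sh_residual[None, :, :]
        nodes = [GaussianNode(self.static_node.means, corrected_sh,
                              self.static_node.opacities)]
        nodes += trav.dynamic_nodes
        # Concatenate all nodes into a single Gaussian set for rasterization.
        return GaussianNode(
            means=np.concatenate([n.means for n in nodes], axis=0),
            sh_coeffs=np.concatenate([n.sh_coeffs for n in nodes], axis=0),
            opacities=np.concatenate([n.opacities for n in nodes], axis=0),
        )

In this sketch, only the per-traversal SH residuals and dynamic nodes would differ between traversals, so the shared static geometry is supervised by all traversals jointly while appearance and moving objects stay traversal-specific.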

@article{li2025_2503.12552,
  title={MTGS: Multi-Traversal Gaussian Splatting},
  author={Tianyu Li and Yihang Qiu and Zhenhua Wu and Carl Lindström and Peng Su and Matthias Nießner and Hongyang Li},
  journal={arXiv preprint arXiv:2503.12552},
  year={2025}
}