Transient objects in video sequences can significantly degrade the quality of 3D scene reconstructions. To address this challenge, we propose T-3DGS, a novel framework that robustly filters out transient distractors during 3D reconstruction using Gaussian Splatting. Our framework consists of two steps. First, we employ an unsupervised classification network that distinguishes transient objects from static scene elements by leveraging their distinct training dynamics within the reconstruction process. Second, we refine these initial detections by integrating an off-the-shelf segmentation method with a bidirectional tracking module, which together enhance boundary accuracy and temporal coherence. Evaluations on both sparsely and densely captured video datasets demonstrate that T-3DGS significantly outperforms state-of-the-art approaches, enabling high-fidelity 3D reconstructions in challenging, real-world scenarios.
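To make the two-stage pipeline concrete, below is a minimal, hypothetical sketch of the idea in Python with NumPy. None of these names come from the T-3DGS codebase: the training-dynamics signal is approximated here by late-stage photometric error (static regions are fit quickly, so their error decays, while transients keep a persistently high error), and the segmentation-plus-bidirectional-tracking refinement is stood in for by a simple majority vote over neighbouring frames in both directions. The paper's actual method uses a learned classifier and an off-the-shelf segmenter; this is only an illustration of the structure.

```python
# Hypothetical sketch of the two-stage transient filtering described in the
# abstract; all names are illustrative, not from the T-3DGS codebase.
# Stage 1 scores pixels as transient from training dynamics (error that
# stays high across iterations); stage 2 refines the raw per-frame masks
# with a simple stand-in for segmentation + bidirectional tracking.
import numpy as np

def transient_scores(per_iter_errors: np.ndarray) -> np.ndarray:
    """per_iter_errors: (T_iters, H, W) photometric error at each checkpoint.

    Static regions are reconstructed quickly, so their error decays;
    transient regions keep a persistently high error. Mean late-stage
    error is a crude proxy for that training-dynamics signal.
    """
    late = per_iter_errors[len(per_iter_errors) // 2:]  # ignore warm-up
    return late.mean(axis=0)

def initial_masks(scores: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Threshold per-frame scores into boolean transient masks."""
    return scores > thresh

def refine_bidirectional(masks: np.ndarray, window: int = 1) -> np.ndarray:
    """Stand-in for the segmentation + bidirectional tracking stage:
    a pixel stays flagged only if a majority of neighbouring frames
    (looking both forward and backward) agree, which suppresses
    one-frame false positives and improves temporal coherence."""
    n_frames = len(masks)
    refined = masks.copy()
    for f in range(n_frames):
        lo, hi = max(0, f - window), min(n_frames, f + window + 1)
        votes = masks[lo:hi].sum(axis=0)
        refined[f] = votes > (hi - lo) // 2  # majority vote over the window
    return refined

# Toy usage: 5 frames, 4 recorded training checkpoints each.
errs = np.random.rand(4, 5, 32, 32)                      # (iters, frames, H, W)
scores = np.stack([transient_scores(errs[:, f]) for f in range(5)])
masks = refine_bidirectional(initial_masks(scores, thresh=0.6))
print(masks.shape, masks.mean())                         # fraction flagged transient
```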
@article{markin2025_2412.00155,
  title   = {T-3DGS: Removing Transient Objects for 3D Scene Reconstruction},
  author  = {Alexander Markin and Vadim Pryadilshchikov and Artem Komarichev and Ruslan Rakhimov and Peter Wonka and Evgeny Burnaev},
  journal = {arXiv preprint arXiv:2412.00155},
  year    = {2025}
}