ResearchTrend.AI
4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos

9 June 2025
Zhen Xu
Zhengqin Li
Zhao Dong
Xiaowei Zhou
Richard Newcombe
Zhaoyang Lv
Topics: 3DGS, ViT
Main: 12 pages, 5 figures, 2 tables; Bibliography: 4 pages
Abstract

We propose 4DGT, a 4D Gaussian-based Transformer model for dynamic scene reconstruction, trained entirely on real-world monocular posed videos. Using 4D Gaussians as an inductive bias, 4DGT unifies static and dynamic components, enabling the modeling of complex, time-varying environments with varying object lifespans. We propose a novel density-control strategy during training, which enables 4DGT to handle longer space-time inputs while remaining efficient to render at runtime. Our model processes 64 consecutive posed frames in a rolling-window fashion, predicting consistent 4D Gaussians in the scene. Unlike optimization-based methods, 4DGT performs purely feed-forward inference, reducing reconstruction time from hours to seconds and scaling effectively to long video sequences. Trained only on large-scale monocular posed video datasets, 4DGT significantly outperforms prior Gaussian-based networks on real-world videos and achieves accuracy on par with optimization-based methods on cross-domain videos. Project page: this https URL
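The rolling-window processing described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the window size of 64 frames comes from the abstract, but the stride value and the chunking helper are illustrative assumptions.

```python
# Hypothetical sketch of rolling-window inference over a long posed video.
# The window size (64 consecutive frames) is stated in the abstract; the
# stride of 32 is an assumed overlap, not taken from the paper.

def rolling_windows(frames, window=64, stride=32):
    """Yield overlapping chunks of `window` consecutive frames."""
    for start in range(0, max(len(frames) - window + 1, 1), stride):
        yield frames[start:start + window]

# A long monocular video is processed chunk by chunk, so a feed-forward
# model's reconstruction cost grows linearly with sequence length.
video = list(range(200))            # stand-in for 200 posed frames
chunks = list(rolling_windows(video))
```

Each chunk would be fed through the feed-forward model independently, which is what lets this style of method scale to long sequences without per-scene optimization.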

@article{xu2025_2506.08015,
  title={4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos},
  author={Zhen Xu and Zhengqin Li and Zhao Dong and Xiaowei Zhou and Richard Newcombe and Zhaoyang Lv},
  journal={arXiv preprint arXiv:2506.08015},
  year={2025}
}