MonoSplat: Generalizable 3D Gaussian Splatting from Monocular Depth Foundation Models

Recent advances in generalizable 3D Gaussian Splatting have demonstrated promising results in real-time high-fidelity rendering without per-scene optimization, yet existing approaches still struggle to handle unfamiliar visual content during inference on novel scenes due to limited generalizability. To address this challenge, we introduce MonoSplat, a novel framework that leverages rich visual priors from pre-trained monocular depth foundation models for robust Gaussian reconstruction. Our approach consists of two key components: a Mono-Multi Feature Adapter that transforms monocular features into multi-view representations, coupled with an Integrated Gaussian Prediction module that effectively fuses both feature types for precise Gaussian generation. Through the Adapter's lightweight attention mechanism, features are seamlessly aligned and aggregated across views while preserving valuable monocular priors, enabling the Prediction module to generate Gaussian primitives with accurate geometry and appearance. Extensive experiments on diverse real-world datasets demonstrate that MonoSplat achieves superior reconstruction quality and generalization capability compared to existing methods while maintaining computational efficiency with minimal trainable parameters. Codes are available at this https URL.
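The Adapter's cross-view attention can be pictured as a standard scaled dot-product attention in which a reference-view monocular feature queries the features of the other views. The sketch below is purely illustrative (the function names and pure-Python setup are assumptions, not the paper's implementation):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_view_attention(query, keys, values):
    """Aggregate features from other views into the query view.

    query: feature vector of the reference view, length d
    keys/values: one feature vector per source view, each length d
    Returns the attention-weighted combination of the value vectors.
    """
    d = len(query)
    # Scaled dot-product similarity between the query and each view's key.
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of value vectors -> aggregated multi-view feature.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]
```

In MonoSplat this alignment is described as lightweight, so the per-view monocular priors are aggregated without a heavy fusion network; the toy version above only conveys the attention pattern, not the actual feature dimensions or layer design.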
@article{liu2025_2505.15185,
  title={MonoSplat: Generalizable 3D Gaussian Splatting from Monocular Depth Foundation Models},
  author={Yifan Liu and Keyu Fan and Weihao Yu and Chenxin Li and Hao Lu and Yixuan Yuan},
  journal={arXiv preprint arXiv:2505.15185},
  year={2025}
}