We present Bridging Geometric and Semantic foundation models (BriGeS), an effective method that fuses geometric and semantic information within foundation models to enhance Monocular Depth Estimation (MDE). Central to BriGeS is the Bridging Gate, which integrates the complementary strengths of depth and segmentation foundation models. This integration is further refined by our Attention Temperature Scaling technique, which adjusts the focus of the attention mechanism to prevent over-concentration on specific features, ensuring balanced performance across diverse inputs. BriGeS capitalizes on pre-trained foundation models and adopts a strategy that trains only the Bridging Gate. This approach significantly reduces resource demands and training time while preserving the model's ability to generalize. Extensive experiments across multiple challenging datasets demonstrate that BriGeS outperforms state-of-the-art methods in MDE for complex scenes, effectively handling intricate structures and overlapping objects.
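To make the two ideas in the abstract concrete, the following is a minimal, illustrative sketch. The abstract does not specify the actual architecture, so the module name BridgingGateSketch, the sigmoid-gated per-channel fusion, the tensor shapes, and the temperature parameter tau are all assumptions; they only show one plausible reading of a learned gate over frozen depth/segmentation features and of temperature-scaled attention that spreads the softmax to avoid over-concentration.

# Hedged sketch: not the authors' implementation. Module names, shapes,
# the gated fusion, and the temperature value are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BridgingGateSketch(nn.Module):
    """Fuses depth (geometric) and segmentation (semantic) token features
    with a learned gate; only this small module would be trained, while
    both foundation-model backbones stay frozen."""

    def __init__(self, dim: int):
        super().__init__()
        # The gate predicts a per-token, per-channel mixing weight in [0, 1].
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, depth_feat: torch.Tensor, seg_feat: torch.Tensor) -> torch.Tensor:
        # depth_feat, seg_feat: (batch, tokens, dim) from the frozen backbones.
        g = torch.sigmoid(self.gate(torch.cat([depth_feat, seg_feat], dim=-1)))
        return g * depth_feat + (1.0 - g) * seg_feat


def temperature_scaled_attention(q, k, v, tau: float = 1.5):
    """Scaled dot-product attention with an extra temperature tau.
    tau > 1 flattens the softmax so attention spreads over more tokens,
    which is one plausible reading of 'preventing over-concentration'."""
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / (d ** 0.5)
    weights = F.softmax(logits / tau, dim=-1)
    return weights @ v


if __name__ == "__main__":
    b, n, d = 2, 16, 64
    fuse = BridgingGateSketch(d)
    fused = fuse(torch.randn(b, n, d), torch.randn(b, n, d))
    out = temperature_scaled_attention(fused, fused, fused, tau=1.5)
    print(fused.shape, out.shape)  # both torch.Size([2, 16, 64])

In this sketch, freezing both backbones and optimizing only the gate's linear layer is what would keep training cost low, mirroring the abstract's claim that only the Bridging Gate is trained.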
@article{ma2025_2505.23400,
  title   = {Bridging Geometric and Semantic Foundation Models for Generalized Monocular Depth Estimation},
  author  = {Sanggyun Ma and Wonjoon Choi and Jihun Park and Jaeyeul Kim and Seunghun Lee and Jiwan Seo and Sunghoon Im},
  journal = {arXiv preprint arXiv:2505.23400},
  year    = {2025}
}