Cross-View Multi-Modal Segmentation @ Ego-Exo4D Challenges 2025

In this report, we present a cross-view multi-modal object segmentation approach for the object correspondence task in the Ego-Exo4D Correspondence Challenges 2025. Given object queries from one perspective (e.g., the ego view), the goal is to predict the corresponding object masks in the other perspective (e.g., the exo view). To tackle this task, we propose a multi-modal condition fusion module that enhances object localization by leveraging both visual masks and textual descriptions as segmentation conditions. Furthermore, to address the visual domain gap between ego and exo views, we introduce a cross-view object alignment module that enforces object-level consistency across perspectives, thereby improving the model's robustness to viewpoint changes. Our proposed method ranked second on the leaderboard of the large-scale Ego-Exo4D object correspondence benchmark. Code will be made available at this https URL.
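The abstract does not detail the two modules, but the ideas can be illustrated with a minimal sketch: a fusion block that conditions segmentation on both a query-mask embedding and a text embedding, and an object-level alignment loss between ego- and exo-view features. This is not the authors' implementation; the module names, feature dimensions, and the InfoNCE-style form of the alignment loss are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the paper's code) of multi-modal condition fusion
# and a cross-view object alignment loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalConditionFusion(nn.Module):
    """Fuses a visual mask embedding and a textual description embedding
    into a single segmentation condition token (hypothetical design)."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mask_proj = nn.Linear(dim, dim)   # project the query-mask embedding
        self.text_proj = nn.Linear(dim, dim)   # project the text token embeddings
        self.fuse = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, mask_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # mask_emb: (B, 1, D) embedding of the query-view object mask
        # text_emb: (B, T, D) token embeddings of the object description
        q = self.mask_proj(mask_emb)
        kv = self.text_proj(text_emb)
        fused, _ = self.fuse(q, kv, kv)        # cross-attend the mask query to text tokens
        return fused + q                       # residual fused condition token, (B, 1, D)

def cross_view_alignment_loss(ego_feat: torch.Tensor, exo_feat: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style loss (an assumed form of object-level consistency):
    pulls together features of the same object across ego/exo views and pushes
    apart features of different objects."""
    ego = F.normalize(ego_feat, dim=-1)        # (N, D) object features from the ego view
    exo = F.normalize(exo_feat, dim=-1)        # (N, D) matched object features from the exo view
    logits = ego @ exo.t() / temperature       # (N, N) cross-view similarity matrix
    targets = torch.arange(ego.size(0), device=ego.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```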
@article{fu2025_2506.05856,
  title   = {Cross-View Multi-Modal Segmentation @ Ego-Exo4D Challenges 2025},
  author  = {Yuqian Fu and Runze Wang and Yanwei Fu and Danda Pani Paudel and Luc Van Gool},
  journal = {arXiv preprint arXiv:2506.05856},
  year    = {2025}
}