RoMeO: Robust Metric Visual Odometry

16 December 2024
Junda Cheng
Zhipeng Cai
Zhaoxing Zhang
Wei Yin
Matthias Müller
Michael Paulitsch
Xin Yang
Abstract

Visual odometry (VO) aims to estimate camera poses from visual inputs, a fundamental building block for many applications such as VR/AR and robotics. This work focuses on monocular RGB VO, where the input is a monocular RGB video without IMU or 3D sensors. Existing approaches lack robustness in this challenging scenario and fail to generalize to unseen data (especially outdoors); they also cannot recover metric-scale poses. We propose Robust Metric Visual Odometry (RoMeO), a novel method that resolves these issues by leveraging priors from pre-trained depth models. RoMeO incorporates both monocular metric depth and multi-view stereo (MVS) models to recover metric scale, simplify correspondence search, provide better initialization, and regularize optimization. Effective strategies are proposed to inject noise during training and to adaptively filter noisy depth priors, ensuring the robustness of RoMeO on in-the-wild data. As shown in Fig. 1, RoMeO advances the state-of-the-art (SOTA) by a large margin across 6 diverse datasets covering both indoor and outdoor scenes. Compared to the current SOTA, DPVO, RoMeO reduces both the relative trajectory error (with the trajectory scale aligned to ground truth) and the absolute trajectory error by more than 50%. The performance gain also transfers to the full SLAM pipeline (with global bundle adjustment and loop closure). Code will be released upon acceptance.
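
The abstract states that a pre-trained monocular metric depth model is what lets RoMeO recover poses at metric scale. As a rough, hypothetical illustration of that idea only (the authors' code has not been released), the sketch below lifts frame-0 keypoints to metric 3D using a depth prior and solves PnP against their frame-1 matches, so the recovered translation already carries metric units. The function name, inputs, and the crude depth-validity filter are assumptions made for this example; RoMeO's actual noise injection and adaptive prior filtering are more involved.

import numpy as np
import cv2

def metric_pose_from_depth_prior(kpts0, kpts1, depth0, K):
    # kpts0, kpts1: (N, 2) matched pixel coordinates in frames 0 and 1
    # depth0: (H, W) metric depth map for frame 0, e.g. from a
    #         pre-trained monocular metric depth model
    # K: (3, 3) camera intrinsics
    z = depth0[kpts0[:, 1].astype(int), kpts0[:, 0].astype(int)]
    valid = z > 0  # hypothetical stand-in for adaptive prior filtering
    kpts0, kpts1, z = kpts0[valid], kpts1[valid], z[valid]

    # Back-project frame-0 keypoints to metric 3D camera coordinates.
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    X = (kpts0[:, 0] - cx) / fx * z
    Y = (kpts0[:, 1] - cy) / fy * z
    pts3d = np.stack([X, Y, z], axis=1).astype(np.float64)

    # PnP + RANSAC: because the 3D points are metric, the recovered
    # translation is metric as well (no scale ambiguity).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, kpts1.astype(np.float64), K.astype(np.float64), None,
        reprojectionError=2.0, iterationsCount=200)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)
    # Rigid transform mapping frame-0 coordinates into frame-1's camera frame.
    return R, tvec.ravel()

The point of the sketch: once the depth prior carries metric units, the relative translation from PnP inherits that scale, which pure 2D-2D epipolar geometry cannot provide.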

@article{cheng2025_2412.11530,
  title={RoMeO: Robust Metric Visual Odometry},
  author={Junda Cheng and Zhipeng Cai and Zhaoxing Zhang and Wei Yin and Matthias Muller and Michael Paulitsch and Xin Yang},
  journal={arXiv preprint arXiv:2412.11530},
  year={2025}
}