Deep Direct Visual Odometry

11 December 2019
Chaoqiang Zhao
Yang Tang
Qiyu Sun
A. Vasilakos
Abstract

Traditional monocular direct visual odometry (DVO) is one of the best-known methods for simultaneously estimating a robot's ego-motion and mapping the environment from images. However, DVO relies heavily on high-quality images and accurate initial pose estimates during tracking, so it may fail when image quality is poor or the initialization is incorrect. Given the outstanding performance of deep learning in image analysis and processing, previous works have shown that deep neural networks can effectively learn the 6-DOF pose between frames from large volumes of image sequences in an unsupervised manner. However, these unsupervised deep learning-based frameworks cannot accurately recover the full trajectory of a long monocular video because of scale inconsistency between the estimated poses. To tackle this problem, we take several measures to improve the scale consistency of our network (TrajNet), including improving the previous loss function and proposing a novel scale-to-trajectory constraint. Moreover, since deep learning-based visual odometry (VO) lacks a mapping thread, we propose a new architecture, deep direct sparse odometry (DDSO), which overcomes the limitations of DVO as well as the missing mapping of deep learning-based VO by embedding our TrajNet into DVO. Extensive experiments on the KITTI dataset show that the proposed network achieves outstanding performance on full-trajectory prediction compared with previous unsupervised monocular methods, and that integrating our TrajNet makes the initialization and tracking of DVO more robust and accurate.
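The abstract does not give the paper's exact formulation, but the scale-consistency idea it mentions is commonly implemented in unsupervised VO as a penalty between the depth predicted for one frame and the depth of a neighboring frame warped into the same view; if the network's scale drifts between frames, this term grows. A minimal illustrative sketch (the function name, normalized form, and NumPy arrays are assumptions for illustration, not the paper's loss):

```python
import numpy as np

def scale_consistency_loss(depth_a, depth_b_warped):
    """Illustrative scale-consistency penalty between the predicted
    depth map of frame A and frame B's depth warped into A's view.
    The normalized absolute difference keeps each term in [0, 1),
    so the loss is insensitive to the absolute depth magnitude."""
    diff = np.abs(depth_a - depth_b_warped)
    return float(np.mean(diff / (depth_a + depth_b_warped)))
```

When the two depth maps agree, the loss is zero; a global scale mismatch between frames (e.g., one map twice the other) produces a constant positive penalty, which is what drives the poses toward a single consistent scale over a long trajectory.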
