arXiv:2007.07013
Pose2RGBD. Generating Depth and RGB images from absolute positions

14 July 2020
Mihai Cristian Pîrvu
Abstract

We propose a method at the intersection of Computer Vision and Computer Graphics that automatically generates RGBD images using neural networks, based on previously seen, synchronized video, depth and pose signals. Since the model must reconstruct both texture (RGB) and structure (Depth), it creates an implicit representation of the scene, as opposed to explicit ones such as meshes or point clouds. The process can be thought of as neural rendering, where we obtain a function f : Pose -> RGBD that we can use to navigate through the generated scene, similarly to graphics simulations. We introduce two new datasets: one based on synthetic data with full ground-truth information, and the other recorded from a drone flight over a university campus using only video and GPS signals. Finally, we propose a fully unsupervised method of generating datasets from videos alone, in order to train the Pose2RGBD networks. Code and datasets are available at: https://gitlab.com/mihaicristianpirvu/pose2rgbd.
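To make the f : Pose -> RGBD mapping concrete, below is a minimal sketch of such a network. This is an illustrative assumption, not the authors' architecture: the use of PyTorch, the 6-DoF pose input, the layer sizes, and the 64x64 output resolution are all hypothetical choices; see the linked repository for the actual implementation.

    # Minimal sketch (not the authors' architecture): a network mapping an
    # absolute 6-DoF pose to a 4-channel RGBD image, i.e. f : Pose -> RGBD.
    # Layer sizes, pose encoding, and output resolution are illustrative.
    import torch
    import torch.nn as nn

    class Pose2RGBDSketch(nn.Module):
        def __init__(self, pose_dim: int = 6, out_hw: tuple = (64, 64)):
            super().__init__()
            self.out_hw = out_hw
            h, w = out_hw
            # Encode the pose into a small spatial feature map, then upsample
            # with transposed convolutions to RGB + Depth channels.
            self.fc = nn.Linear(pose_dim, 256 * (h // 16) * (w // 16))
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 4, 4, stride=2, padding=1),  # RGB + D
            )

        def forward(self, pose: torch.Tensor) -> torch.Tensor:
            h, w = self.out_hw
            x = self.fc(pose).view(-1, 256, h // 16, w // 16)
            rgbd = self.decoder(x)
            # RGB squashed to [0, 1]; depth left as an unbounded regression.
            rgb = torch.sigmoid(rgbd[:, :3])
            depth = rgbd[:, 3:]
            return torch.cat([rgb, depth], dim=1)

    # Usage: one pose (x, y, z, roll, pitch, yaw) -> one RGBD frame.
    model = Pose2RGBDSketch()
    pose = torch.randn(1, 6)
    frame = model(pose)  # shape: (1, 4, 64, 64)

Trained on synchronized (pose, RGB, depth) triplets, such a network stores the scene implicitly in its weights, which is what allows free navigation by querying new poses.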
