Matching-Free Depth Recovery from Structured Light

13 January 2025
Zhuohang Yu
Kai Wang
Kun Huang
Juyong Zhang
    3DV
arXiv (abs) · PDF · HTML
Main: 10 pages · 10 figures · 1 table · Bibliography: 3 pages
Abstract

We present a novel approach for depth estimation from images captured by structured light systems. Unlike many previous methods that rely on an image-matching process, our approach uses a density voxel grid to represent scene geometry, which is trained via self-supervised differentiable volume rendering. Our method leverages color fields derived from the projected patterns of structured light systems during the rendering process, enabling isolated optimization of the geometry field. This contributes to faster convergence and high-quality output. Additionally, we incorporate normalized device coordinates (NDC), a distortion loss, and a novel surface-based color loss to enhance geometric fidelity. Experimental results demonstrate that our method outperforms existing matching-based techniques in geometric performance for few-shot scenarios, reducing average estimated depth errors by approximately 60% on synthetic scenes and about 30% on real-world captured scenes. Furthermore, our approach trains quickly, roughly three times faster than previous matching-free methods that employ implicit representations.
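
The core idea described above is that only the density (geometry) voxel grid needs to be optimized, because per-sample colors can be looked up from the known projected pattern rather than learned. The following is a minimal, hypothetical PyTorch sketch of one such volume-rendering step, not the authors' implementation; the function name render_ray, the tensor shapes, and the uniform-step sampling are illustrative assumptions.

# Hypothetical sketch: differentiable volume rendering through a density
# voxel grid, with per-sample colors taken from a known projector pattern.
# Not the paper's code; shapes and names are illustrative assumptions.
import torch
import torch.nn.functional as F

def render_ray(density_grid, sample_pts, sample_colors, t_vals, step_size):
    # density_grid : (1, 1, D, H, W) learnable voxel densities
    # sample_pts   : (N, 3) sample positions in normalized [-1, 1] coordinates
    # sample_colors: (N, 3) colors looked up from the projected pattern
    # t_vals       : (N,) distances of the samples along the ray
    # step_size    : scalar spacing between samples (assumed uniform)

    # Trilinearly interpolate density at each sample point.
    grid_coords = sample_pts.reshape(1, 1, 1, -1, 3)
    sigma = F.grid_sample(density_grid, grid_coords, align_corners=True).reshape(-1)
    sigma = F.softplus(sigma)  # keep densities non-negative

    # Standard alpha compositing along the ray.
    alpha = 1.0 - torch.exp(-sigma * step_size)                       # (N,)
    ones = torch.ones(1, device=alpha.device)
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                           # (N,)

    rendered_color = (weights[:, None] * sample_colors).sum(dim=0)    # (3,)
    rendered_depth = (weights * t_vals).sum()
    return rendered_color, rendered_depth

In a full pipeline, the rendered color would be compared against the captured camera pixel (together with the distortion and surface-based color losses mentioned in the abstract), and the loss backpropagated into the density grid alone, since the colors come from the known pattern.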

@article{yu2025_2501.07113,
  title={Matching-Free Depth Recovery from Structured Light},
  author={Zhuohang Yu and Kai Wang and Kun Huang and Juyong Zhang},
  journal={arXiv preprint arXiv:2501.07113},
  year={2025}
}