Multi-Label Stereo Matching for Transparent Scene Depth Estimation

20 May 2025
Zhidan Liu
Chengtang Yao
Jiaxi Zeng
Yuwei Wu
Yunde Jia
    3DV
Abstract

In this paper, we present a multi-label stereo matching method to simultaneously estimate the depth of transparent objects and the occluded background in transparent scenes. Unlike previous methods that assume a unimodal distribution along the disparity dimension and formulate matching as a single-label regression problem, we propose a multi-label regression formulation to estimate multiple depth values at the same pixel in transparent scenes. To resolve the multi-label regression problem, we introduce a pixel-wise multivariate Gaussian representation, where the mean vector encodes multiple depth values at the same pixel, and the covariance matrix determines whether a multi-label representation is necessary for a given pixel. The representation is iteratively predicted within a GRU framework. In each iteration, we first predict the update step for the mean parameters and then use both the update step and the updated mean parameters to estimate the covariance matrix. We also synthesize a dataset containing 10 scenes and 89 objects to validate the performance of transparent scene depth estimation. The experiments show that our method greatly improves the performance on transparent surfaces while preserving the background information for scene reconstruction. Code is available at this https URL.
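The abstract only outlines the iterative update, so the following is a minimal, hypothetical sketch of how a per-pixel multi-label disparity update (mean vector plus covariance) inside a GRU loop could be organized. The module name, layer sizes, the use of nn.GRUCell as a stand-in for a convolutional GRU, and the choice of K = 2 labels (transparent surface plus occluded background) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class MultiLabelDisparityUpdater(nn.Module):
    """Sketch of an iterative multi-label disparity update (assumed design).

    Each pixel keeps a mean vector mu of K candidate disparities (e.g. one for
    the transparent surface, one for the occluded background) and a covariance
    estimate indicating whether the pixel actually needs more than one label.
    """

    def __init__(self, feat_dim=128, hidden_dim=96, num_labels=2):
        super().__init__()
        self.num_labels = num_labels
        # Stand-in for a ConvGRU cell, operating on per-pixel features here.
        self.gru = nn.GRUCell(feat_dim + num_labels, hidden_dim)
        # Head predicting the update step (delta) for the mean vector.
        self.delta_head = nn.Linear(hidden_dim, num_labels)
        # Head predicting flattened covariance parameters from the update step
        # and the updated means, as described in the abstract.
        self.cov_head = nn.Linear(hidden_dim + 2 * num_labels,
                                  num_labels * num_labels)

    def forward(self, feat, mu, hidden, num_iters=4):
        # feat:   (N, feat_dim) matching features per pixel
        # mu:     (N, K) current multi-label disparity means
        # hidden: (N, hidden_dim) GRU hidden state
        for _ in range(num_iters):
            hidden = self.gru(torch.cat([feat, mu], dim=-1), hidden)
            delta = self.delta_head(hidden)      # update step for the means
            mu = mu + delta                      # updated mean parameters
            cov_in = torch.cat([hidden, delta, mu], dim=-1)
            cov = self.cov_head(cov_in).view(-1, self.num_labels, self.num_labels)
        return mu, cov, hidden

In practice the covariance output would then gate whether a pixel is rendered with one or two depth hypotheses; that selection logic is omitted here since the abstract does not specify it.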

View on arXiv
@article{liu2025_2505.14008,
  title={Multi-Label Stereo Matching for Transparent Scene Depth Estimation},
  author={Zhidan Liu and Chengtang Yao and Jiaxi Zeng and Yuwei Wu and Yunde Jia},
  journal={arXiv preprint arXiv:2505.14008},
  year={2025}
}