SR3D: Unleashing Single-view 3D Reconstruction for Transparent and Specular Object Grasping

30 May 2025
Mingxu Zhang
Xiaoqi Li
Jiahui Xu
Kaichen Zhou
Hojin Bae
Yan Shen
Chuyan Xiong
Jiaming Liu
arXiv (abs) · PDF · HTML
Main: 6 pages
4 figures
1 table
Bibliography: 2 pages
Abstract

Recent advancements in 3D robotic manipulation have improved grasping of everyday objects, but transparent and specular materials remain challenging due to depth-sensing limitations. While several 3D reconstruction and depth-completion approaches address these challenges, they suffer from setup complexity or limited use of the available observations. To address this, we leverage the power of single-view 3D object reconstruction and propose SR3D, a training-free framework that enables robotic grasping of transparent and specular objects from a single-view observation. Specifically, given single-view RGB and depth images, SR3D first uses external visual models to generate a reconstructed 3D object mesh from the RGB image. The key idea is then to determine the 3D object's pose and scale so as to accurately localize the reconstructed object back into its original, depth-corrupted 3D scene. To this end, we propose view-matching and keypoint-matching mechanisms, which exploit the inherent semantic and geometric information in both the 2D and 3D observations to determine the object's 3D state within the scene, thereby reconstructing an accurate depth map for effective grasp detection. Experiments in both simulation and the real world demonstrate the reconstruction effectiveness of SR3D.
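The abstract describes an alignment step in which the reconstructed mesh is placed back into the depth-corrupted scene by estimating its pose and scale from matched keypoints. The paper's code is not reproduced on this page, so the following is only a minimal sketch of one standard way to solve that sub-problem: a Umeyama similarity alignment between keypoints on the reconstructed mesh and scene keypoints back-projected from the valid depth pixels. All function and variable names here (backproject_depth, umeyama_similarity, mesh_kpts, scene_kpts) are illustrative assumptions, not the authors' implementation.

import numpy as np


def backproject_depth(depth, K, mask):
    """Lift valid depth pixels (mask) to 3D camera-frame points via a pinhole model.

    In SR3D-like settings, scene keypoints matched to the mesh would be taken
    from pixels where the sensor depth is still reliable.
    """
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)               # (N, 3)


def umeyama_similarity(src, dst):
    """Least-squares similarity transform: dst ~= s * R @ src + t (Umeyama, 1991)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)                 # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:     # avoid a reflection solution
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (src_c ** 2).sum(1).mean()
    t = mu_d - s * R @ mu_s
    return s, R, t


# Illustrative usage with synthetic correspondences (real inputs would be matched
# keypoints on the reconstructed mesh and back-projected valid-depth scene points).
rng = np.random.default_rng(0)
mesh_kpts = rng.uniform(-0.05, 0.05, size=(20, 3))   # object-frame keypoints
scene_kpts = 2.0 * mesh_kpts + np.array([0.1, 0.0, 0.6])
s, R, t = umeyama_similarity(mesh_kpts, scene_kpts)
aligned_kpts = s * mesh_kpts @ R.T + t               # mesh keypoints placed in the scene
print(np.allclose(aligned_kpts, scene_kpts, atol=1e-8))

A similarity transform (rather than a rigid one) is the natural choice here because, as the abstract notes, both pose and scale must be recovered: a single-view reconstruction is only defined up to an unknown scale, which the matched scene geometry resolves.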

@article{zhang2025_2505.24305,
  title={SR3D: Unleashing Single-view 3D Reconstruction for Transparent and Specular Object Grasping},
  author={Mingxu Zhang and Xiaoqi Li and Jiahui Xu and Kaichen Zhou and Hojin Bae and Yan Shen and Chuyan Xiong and Hao Dong},
  journal={arXiv preprint arXiv:2505.24305},
  year={2025}
}