
Monocular One-Shot Metric-Depth Alignment for RGB-Based Robot Grasping

Main text: 7 pages; bibliography: 1 page; 9 figures; 5 tables
Abstract

Accurate 6D object pose estimation is a prerequisite for successfully completing robotic prehensile and non-prehensile manipulation tasks. At present, 6D pose estimation for robotic manipulation generally relies on depth sensors based on, e.g., structured light, time-of-flight, or stereo vision, which can be expensive, produce noisy output (compared with RGB cameras), and fail on transparent objects. State-of-the-art monocular depth estimation models (MDEMs), on the other hand, provide only affine-invariant depths, valid up to an unknown scale and shift. Metric MDEMs achieve some zero-shot success on public datasets but fail to generalize. We propose a novel framework, Monocular One-shot Metric-depth Alignment (MOMA), that recovers metric depth from a single RGB image through a one-shot adaptation built on MDEM techniques. MOMA performs a scale-rotation-shift alignment during camera calibration, guided by sparse ground-truth depth points, enabling accurate depth estimation without additional data collection or model retraining on the test setup. MOMA also supports fine-tuning the MDEM on transparent objects, demonstrating strong generalization. Real-world experiments on tabletop two-finger grasping and suction-based bin-picking show that MOMA achieves high success rates across diverse tasks, confirming its effectiveness.
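To make the alignment idea concrete, below is a minimal sketch (not from the paper) of the scale-and-shift portion of the correction: given sparse ground-truth depth points, a scale s and shift t are fit by least squares so that s · d_pred + t matches metric depth at those pixels. MOMA's full alignment also estimates a rotation during camera calibration, which this sketch omits; the names align_scale_shift, pred_depth, and gt_points are hypothetical.

```python
import numpy as np

def align_scale_shift(pred_depth, gt_points):
    """Least-squares scale-and-shift alignment of an affine-invariant
    MDEM depth map to sparse metric ground-truth points.

    pred_depth : (H, W) array, affine-invariant depth prediction
    gt_points  : iterable of ((row, col), metric_depth) pairs
    Returns (scale, shift) such that scale * pred_depth + shift
    approximates metric depth at the ground-truth pixels.
    """
    d_pred = np.array([pred_depth[r, c] for (r, c), _ in gt_points])
    d_gt = np.array([z for _, z in gt_points])
    # Solve min over (s, t) of || s * d_pred + t - d_gt ||^2
    # as an ordinary linear least-squares problem.
    A = np.stack([d_pred, np.ones_like(d_pred)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, d_gt, rcond=None)
    return scale, shift

# Hypothetical usage: fit once at calibration time, then reuse per frame.
# pred = mdem(rgb_image)                                # affine-invariant depth
# sparse_gt = [((120, 300), 0.62), ((400, 510), 0.84)]  # metric depths in meters
# s, t = align_scale_shift(pred, sparse_gt)
# metric_depth = s * pred + t
```

Because only two parameters are fit, a handful of calibration points suffices, which is why no additional data collection or retraining is needed on a new setup.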

@article{guo2025_2506.17110,
  title={Monocular One-Shot Metric-Depth Alignment for RGB-Based Robot Grasping},
  author={Teng Guo and Baichuan Huang and Jingjin Yu},
  journal={arXiv preprint arXiv:2506.17110},
  year={2025}
}