Spatial RoboGrasp: Generalized Robotic Grasping Control Policy

Main: 9 pages · Bibliography: 3 pages · Appendix: 1 page · 7 figures · 2 tables
Abstract

Achieving generalizable and precise robotic manipulation across diverse environments remains a critical challenge, largely due to limitations in spatial perception. While prior imitation-learning approaches have made progress, their reliance on raw RGB inputs and handcrafted features often leads to overfitting and poor 3D reasoning under varied lighting, occlusion, and object conditions. In this paper, we propose a unified framework that couples robust multimodal perception with reliable grasp prediction. Our architecture fuses domain-randomized augmentation, monocular depth estimation, and a depth-aware 6-DoF Grasp Prompt into a single spatial representation for downstream action planning. Conditioned on this encoding and a high-level task prompt, our diffusion-based policy yields precise action sequences, achieving up to 40% improvement in grasp success and 45% higher task success rates under environmental variation. These results demonstrate that spatially grounded perception, paired with diffusion-based imitation learning, offers a scalable and robust solution for general-purpose robotic grasping.
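To make the described pipeline concrete, below is a minimal, hypothetical sketch (not the authors' code) of how domain-randomized RGB, estimated monocular depth, and a 6-DoF grasp prompt could be fused into a single spatial encoding that, together with a task-prompt embedding, conditions a diffusion-style action denoiser. All module names, dimensions, and the action-chunk format are assumptions for illustration only.

```python
# Hypothetical sketch of the abstract's data flow; not the paper's implementation.
import torch
import torch.nn as nn

class SpatialEncoder(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Lightweight CNN stand-ins for the RGB and monocular-depth branches.
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.depth_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        # 6-DoF grasp prompt: translation (3) + rotation (3), embedded to feat_dim.
        self.grasp_embed = nn.Linear(6, feat_dim)
        self.fuse = nn.Linear(3 * feat_dim, feat_dim)

    def forward(self, rgb, depth, grasp_prompt):
        fused = torch.cat([self.rgb_branch(rgb),
                           self.depth_branch(depth),
                           self.grasp_embed(grasp_prompt)], dim=-1)
        return self.fuse(fused)  # single spatial representation

class ActionDenoiser(nn.Module):
    """Predicts the noise added to an action chunk, conditioned on the
    spatial encoding and a task-prompt embedding (DDPM-style training)."""
    def __init__(self, act_dim=7, horizon=16, feat_dim=256, task_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(act_dim * horizon + feat_dim + task_dim + 1, 512),
            nn.ReLU(),
            nn.Linear(512, act_dim * horizon))

    def forward(self, noisy_actions, t, spatial_enc, task_emb):
        x = torch.cat([noisy_actions.flatten(1), spatial_enc, task_emb,
                       t.float().unsqueeze(-1)], dim=-1)
        return self.net(x).view_as(noisy_actions)

# Toy forward pass with random tensors, just to show the conditioning flow.
enc, den = SpatialEncoder(), ActionDenoiser()
rgb = torch.rand(2, 3, 96, 96)      # domain-randomized RGB observation
depth = torch.rand(2, 1, 96, 96)    # estimated monocular depth map
grasp = torch.rand(2, 6)            # depth-aware 6-DoF grasp prompt
task = torch.rand(2, 64)            # high-level task-prompt embedding
noisy = torch.randn(2, 16, 7)       # noised action chunk (horizon x act_dim)
t = torch.randint(0, 100, (2,))     # diffusion timestep
pred_noise = den(noisy, t, enc(rgb, depth, grasp), task)
print(pred_noise.shape)             # torch.Size([2, 16, 7])
```

At inference time, such a denoiser would be run iteratively to refine a noise sample into an action sequence; the sketch only covers the conditioning path that the abstract emphasizes.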

@article{huang2025_2505.20814,
  title={Spatial RoboGrasp: Generalized Robotic Grasping Control Policy},
  author={Yiqi Huang and Travis Davies and Jiahuan Yan and Jiankai Sun and Xiang Chen and Luhui Hu},
  journal={arXiv preprint arXiv:2505.20814},
  year={2025}
}