ZeroGrasp: Zero-Shot Shape Reconstruction Enabled Robotic Grasping

15 April 2025
Shun Iwase, Zubair Irshad, Katherine Liu, Vitor Campagnolo Guizilini, Robert Lee, Takuya Ikeda, Ayako Amma, Koichi Nishiwaki, Kris M. Kitani, Rares Andrei Ambrus, Sergey Zakharov
Abstract

Robotic grasping is a cornerstone capability of embodied systems. Many methods directly output grasps from partial observations without modeling the geometry of the scene, leading to suboptimal motion and even collisions. To address these issues, we introduce ZeroGrasp, a novel framework that simultaneously performs 3D reconstruction and grasp pose prediction in near real-time. A key insight of our method is that occlusion reasoning and modeling the spatial relationships between objects are beneficial for both accurate reconstruction and grasping. We couple our method with a novel large-scale synthetic dataset comprising 1M photo-realistic images, high-resolution 3D reconstructions, and 11.3B physically valid grasp pose annotations for 12K objects from the Objaverse-LVIS dataset. We evaluate ZeroGrasp on the GraspNet-1B benchmark as well as through real-world robot experiments. ZeroGrasp achieves state-of-the-art performance and generalizes to novel real-world objects by leveraging synthetic data.
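The pipeline the abstract describes, grasp prediction coupled to occlusion-aware shape completion rather than computed directly from a partial view, can be pictured with a minimal sketch. Everything below is illustrative: the function names, the single-object placeholder "reconstruction", and the centroid-based candidate pose are assumptions standing in for the paper's learned components, not ZeroGrasp's actual API.

import numpy as np

def reconstruct_scene(rgb: np.ndarray, depth: np.ndarray) -> list[np.ndarray]:
    """Placeholder shape completion: returns one point cloud per object.
    A real system would run learned, occlusion-aware reconstruction here,
    modeling spatial relationships between objects."""
    mask = depth > 0
    ys, xs = np.nonzero(mask)
    # Back-project valid pixels into a toy (x, y, z) cloud; a single
    # object stands in for a full multi-object scene.
    pts = np.stack([xs, ys, depth[mask]], axis=-1).astype(np.float64)
    return [pts]

def predict_grasps(obj_pts: np.ndarray, scene: list[np.ndarray]) -> np.ndarray:
    """Placeholder grasp head: returns candidate 4x4 gripper poses.
    Conditioning on the completed scene geometry is what would let a
    real planner reject grasps that collide with occluded structure."""
    center = obj_pts.mean(axis=0) if obj_pts.size else np.zeros(3)
    pose = np.eye(4)
    pose[:3, 3] = center  # toy grasp: approach the object centroid
    return pose[None]     # shape (1, 4, 4)

if __name__ == "__main__":
    rgb = np.zeros((64, 64, 3), dtype=np.uint8)
    depth = np.random.rand(64, 64).astype(np.float32)
    shapes = reconstruct_scene(rgb, depth)
    grasps = [predict_grasps(s, shapes) for s in shapes]
    print(f"{len(shapes)} object(s), {sum(g.shape[0] for g in grasps)} grasp pose(s)")

The structural point of the sketch is the data flow: the grasp head consumes the completed geometry, so candidate poses can be checked against surfaces the camera never observed.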

@article{iwase2025_2504.10857,
  title={ZeroGrasp: Zero-Shot Shape Reconstruction Enabled Robotic Grasping},
  author={Shun Iwase and Zubair Irshad and Katherine Liu and Vitor Guizilini and Robert Lee and Takuya Ikeda and Ayako Amma and Koichi Nishiwaki and Kris Kitani and Rares Ambrus and Sergey Zakharov},
  journal={arXiv preprint arXiv:2504.10857},
  year={2025}
}