MonoDiff9D: Monocular Category-Level 9D Object Pose Estimation via Diffusion Model

14 April 2025
Jian Liu
Wei Sun
Hui Yang
Jin Zheng
Zichen Geng
Hossein Rahmani
Ajmal Mian
Abstract

Object pose estimation is a core means for robots to understand and interact with their environment. For this task, monocular category-level methods are attractive as they require only a single RGB camera. However, current methods rely on shape priors or CAD models of the intra-class known objects. We propose a diffusion-based monocular category-level 9D object pose generation method, MonoDiff9D. Our motivation is to leverage the probabilistic nature of diffusion models to alleviate the need for shape priors, CAD models, or depth sensors for intra-class unknown object pose estimation. We first estimate coarse depth via DINOv2 from the monocular image in a zero-shot manner and convert it into a point cloud. We then fuse the global features of the point cloud with the input image and use the fused features along with the encoded time step to condition MonoDiff9D. Finally, we design a transformer-based denoiser to recover the object pose from Gaussian noise. Extensive experiments on two popular benchmark datasets show that MonoDiff9D achieves state-of-the-art monocular category-level 9D object pose estimation accuracy without the need for shape priors or CAD models at any stage. Our code will be made public at this https URL.
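To make the conditional-diffusion idea in the abstract concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: the feature extractors, dimensions, noise schedule, token layout, and the 9D parameterization (3 rotation + 3 translation + 3 size) are all assumptions. It shows a transformer-based denoiser that predicts the noise added to a 9D pose vector, conditioned on fused image/point-cloud features and an encoded time step, plus a DDPM-style reverse loop that recovers the pose from Gaussian noise.

import math
import torch
import torch.nn as nn


def timestep_embedding(t: torch.Tensor, dim: int) -> torch.Tensor:
    # Standard sinusoidal embedding of the diffusion time step.
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, device=t.device) / half)
    args = t.float()[:, None] * freqs[None, :]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)


class PoseDenoiser(nn.Module):
    # Transformer-based denoiser (illustrative): predicts the noise added to a
    # 9D pose vector, conditioned on a fused global feature and the time step.
    def __init__(self, cond_dim: int = 256, d_model: int = 256, n_layers: int = 4):
        super().__init__()
        self.pose_in = nn.Linear(9, d_model)
        self.cond_in = nn.Linear(cond_dim, d_model)
        self.time_in = nn.Linear(d_model, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.pose_out = nn.Linear(d_model, 9)

    def forward(self, noisy_pose, cond_feat, t):
        # Three tokens: [noisy pose, fused condition, time step]; the pose
        # token's output is decoded back to a 9D noise estimate.
        tokens = torch.stack([
            self.pose_in(noisy_pose),
            self.cond_in(cond_feat),
            self.time_in(timestep_embedding(t, self.pose_in.out_features)),
        ], dim=1)
        return self.pose_out(self.encoder(tokens)[:, 0])


@torch.no_grad()
def sample_pose(model, cond_feat, steps: int = 1000):
    # DDPM-style reverse process: start from Gaussian noise and iteratively
    # denoise the 9D pose, conditioned on the fused features.
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(cond_feat.shape[0], 9)
    for i in reversed(range(steps)):
        t = torch.full((x.shape[0],), i, dtype=torch.long)
        eps = model(x, cond_feat, t)
        coef = betas[i] / torch.sqrt(1.0 - alpha_bar[i])
        x = (x - coef * eps) / torch.sqrt(alphas[i])
        if i > 0:
            x = x + torch.sqrt(betas[i]) * torch.randn_like(x)
    return x  # estimated 9D pose: rotation, translation, size

In this sketch, cond_feat stands in for the fused global image and coarse point-cloud features; in the paper these come from the RGB image and a point cloud lifted from DINOv2 zero-shot depth.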

@article{liu2025_2504.10433,
  title={MonoDiff9D: Monocular Category-Level 9D Object Pose Estimation via Diffusion Model},
  author={Jian Liu and Wei Sun and Hui Yang and Jin Zheng and Zichen Geng and Hossein Rahmani and Ajmal Mian},
  journal={arXiv preprint arXiv:2504.10433},
  year={2025}
}