One2Any: One-Reference 6D Pose Estimation for Any Object

7 May 2025
Mengya Liu
Siyuan Li
Ajad Chhatkuli
Prune Truong
Luc Van Gool
Federico Tombari
Abstract

6D object pose estimation remains challenging for many applications due to dependencies on complete 3D models, multi-view images, or training limited to specific object categories. These requirements make generalization to novel objects difficult, since neither 3D models nor multi-view images may be available for them. To address this, we propose One2Any, a novel method that estimates the relative 6-degrees-of-freedom (DOF) object pose using only a single reference and a single query RGB-D image, without prior knowledge of the object's 3D model, multi-view data, or category constraints. We treat object pose estimation as an encoding-decoding process: first, we obtain a comprehensive Reference Object Pose Embedding (ROPE) that encodes the object's shape, orientation, and texture from a single reference view. Using this embedding, a U-Net-based pose decoding module produces Reference Object Coordinates (ROC) for new views, enabling fast and accurate pose estimation. This simple encoding-decoding framework allows our model to be trained on any pair-wise pose data, enabling large-scale training and demonstrating great scalability. Experiments on multiple benchmark datasets demonstrate that our model generalizes well to novel objects, achieving state-of-the-art accuracy and robustness that rival even methods requiring multi-view or CAD inputs, at a fraction of the compute.
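
The abstract describes a two-stage encode-decode pipeline: a reference encoder compresses a single RGB-D view into a Reference Object Pose Embedding (ROPE), and a U-Net-based decoder, conditioned on that embedding, predicts Reference Object Coordinates (ROC) for a query view. The following PyTorch sketch illustrates this interface only; all class names, layer sizes, and the toy architectures are illustrative assumptions and do not reflect the authors' implementation.

# Minimal sketch of the encode-decode interface described in the abstract.
# All names and architectures here are illustrative assumptions.
import torch
import torch.nn as nn

class ReferenceEncoder(nn.Module):
    """Maps a single reference RGB-D view (4 channels) to a Reference Object
    Pose Embedding (ROPE) summarizing shape, orientation, and texture."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, ref_rgbd: torch.Tensor) -> torch.Tensor:
        # (B, 4, H, W) -> (B, embed_dim)
        return self.backbone(ref_rgbd).flatten(1)

class ROCDecoder(nn.Module):
    """Toy stand-in for the U-Net-based decoding module: conditioned on the
    ROPE, it predicts a 3-channel Reference Object Coordinate (ROC) map for
    the query view."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.fuse = nn.Conv2d(4 + embed_dim, 64, 3, padding=1)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 1),
        )

    def forward(self, query_rgbd: torch.Tensor, rope: torch.Tensor) -> torch.Tensor:
        b, _, h, w = query_rgbd.shape
        # Broadcast the embedding spatially and fuse it with the query view.
        cond = rope[:, :, None, None].expand(b, -1, h, w)
        return self.head(self.fuse(torch.cat([query_rgbd, cond], dim=1)))

if __name__ == "__main__":
    ref = torch.randn(1, 4, 128, 128)    # single reference RGB-D view
    query = torch.randn(1, 4, 128, 128)  # single query RGB-D view
    rope = ReferenceEncoder()(ref)
    roc = ROCDecoder()(query, rope)      # per-pixel object coordinates
    print(roc.shape)                      # torch.Size([1, 3, 128, 128])

The relative 6-DOF pose would then be recovered from the predicted ROC map (for example via a correspondence-based solver); the abstract does not specify this step, so the sketch stops at coordinate prediction.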

@article{liu2025_2505.04109,
  title={One2Any: One-Reference 6D Pose Estimation for Any Object},
  author={Mengya Liu and Siyuan Li and Ajad Chhatkuli and Prune Truong and Luc Van Gool and Federico Tombari},
  journal={arXiv preprint arXiv:2505.04109},
  year={2025}
}