Pow3R: Empowering Unconstrained 3D Reconstruction with Camera and Scene Priors

21 March 2025
Wonbong Jang
Philippe Weinzaepfel
Vincent Leroy
Lourdes Agapito
Jérôme Revaud
Abstract

We present Pow3R, a novel large 3D vision regression model that is highly versatile in the input modalities it accepts. Unlike previous feed-forward models that lack any mechanism to exploit known camera or scene priors at test time, Pow3R incorporates any combination of auxiliary information, such as intrinsics, relative pose, and dense or sparse depth, alongside the input images, within a single network. Building upon the recent DUSt3R paradigm, a transformer-based architecture that leverages powerful pre-training, our lightweight and versatile conditioning acts as additional guidance for the network to predict more accurate estimates when auxiliary information is available. During training we feed the model with random subsets of modalities at each iteration, which enables the model to operate under different levels of known priors at test time. This in turn opens up new capabilities, such as performing inference at native image resolution or point-cloud completion. Our experiments on 3D reconstruction, depth completion, multi-view depth prediction, multi-view stereo, and multi-view pose estimation tasks yield state-of-the-art results and confirm the effectiveness of Pow3R at exploiting all available information. The project webpage is this https URL.
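The training recipe described above, feeding a random subset of auxiliary modalities at each iteration, can be illustrated with a short sketch. The code below is a hypothetical Python illustration based only on the abstract, not the authors' implementation; all names (sample_aux_subset, training_step, p_keep, the model call signature) are assumptions.

# Minimal sketch (not the authors' code) of training-time modality dropout:
# at each iteration a random subset of auxiliary priors (intrinsics, relative
# pose, sparse or dense depth) is fed to the network alongside the images,
# so that at test time the model can operate under any combination of priors.

import random

AUX_MODALITIES = ("intrinsics", "relative_pose", "sparse_depth", "dense_depth")

def sample_aux_subset(available_priors, p_keep=0.5):
    """Independently keep each available auxiliary prior with probability p_keep."""
    return {name: value for name, value in available_priors.items()
            if name in AUX_MODALITIES and random.random() < p_keep}

def training_step(model, images, available_priors):
    """One iteration: condition the forward pass on a random subset of priors."""
    aux = sample_aux_subset(available_priors)
    # Modalities absent from `aux` are simply not provided, so the network
    # learns to produce estimates both with and without each prior.
    return model(images, **aux)

At test time the same mechanism lets the user pass whichever priors happen to be known (for example only the intrinsics), and the network exploits them without any retraining.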

View on arXiv
@article{jang2025_2503.17316,
  title={Pow3R: Empowering Unconstrained 3D Reconstruction with Camera and Scene Priors},
  author={Wonbong Jang and Philippe Weinzaepfel and Vincent Leroy and Lourdes Agapito and Jérôme Revaud},
  journal={arXiv preprint arXiv:2503.17316},
  year={2025}
}