PlückeRF: A Line-based 3D Representation for Few-view Reconstruction

4 June 2025
Sam Bahrami
Dylan Campbell
Main: 8 pages · 7 figures · 3 tables · Bibliography: 2 pages · Appendix: 3 pages
Abstract

Feed-forward 3D reconstruction methods aim to predict the 3D structure of a scene directly from input images, providing a faster alternative to per-scene optimization approaches. Significant progress has been made in single-view and few-view reconstruction using learned priors that infer object shape and appearance, even for unobserved regions. However, there is substantial potential to enhance these methods by better leveraging information from multiple views when available. To address this, we propose a few-view reconstruction model that more effectively harnesses multi-view information. Our approach introduces a simple mechanism that connects the 3D representation with pixel rays from the input views, allowing for preferential sharing of information between nearby 3D locations and between 3D locations and nearby pixel rays. We achieve this by defining the 3D representation as a set of structured, feature-augmented lines: the PlückeRF representation. Using this representation, we demonstrate improvements in reconstruction quality over the equivalent triplane representation and over state-of-the-art feed-forward reconstruction methods.
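The name PlückeRF alludes to Plücker coordinates, the standard six-dimensional parameterization of 3D lines that line-based representations such as this one typically build on. As a minimal illustration (a sketch of the general parameterization, not the authors' implementation), a pixel ray through camera origin o with unit direction d can be encoded as the pair (d, o × d); the moment o × d is invariant to sliding the origin along the ray, so the pair identifies the line itself rather than any particular point on it:

```python
import numpy as np

def plucker_ray(origin: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Encode a ray as 6D Plücker coordinates (d, m) with m = o × d.

    Normalizing the direction makes the encoding unique for a given
    oriented line; the moment m is orthogonal to d by construction.
    """
    d = direction / np.linalg.norm(direction)  # unit direction
    m = np.cross(origin, d)                    # moment vector
    return np.concatenate([d, m])

# Two origins on the same ray yield identical Plücker coordinates.
o1 = np.array([1.0, 2.0, 0.0])
d = np.array([0.0, 0.0, 1.0])
o2 = o1 + 2.5 * d  # slide the origin along the ray
print(np.allclose(plucker_ray(o1, d), plucker_ray(o2, d)))  # True
```

This origin-independence is what makes the encoding a natural per-pixel-ray feature: rays from different cameras live in a common 6D space where nearby lines have nearby coordinates.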

@article{bahrami2025_2506.03713,
  title={PlückeRF: A Line-based 3D Representation for Few-view Reconstruction},
  author={Sam Bahrami and Dylan Campbell},
  journal={arXiv preprint arXiv:2506.03713},
  year={2025}
}