3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation

19 March 2025
Gyeongrok Oh
Sungjune Kim
Heeju Ko
Hyung-Gun Chi
Jinkyu Kim
Dongwook Lee
Daehyun Ji
Sungjoon Choi
Sujin Jang
Sangpil Kim
Abstract

The resolution of voxel queries significantly influences the quality of view transformation in camera-based 3D occupancy prediction. However, computational constraints and the practical necessity for real-time deployment require smaller query resolutions, which inevitably leads to information loss. Therefore, it is essential to encode and preserve rich visual details within limited query sizes while ensuring a comprehensive representation of 3D occupancy. To this end, we introduce ProtoOcc, a novel occupancy network that leverages prototypes of clustered image segments in view transformation to enhance low-resolution context. In particular, the mapping of 2D prototypes onto 3D voxel queries encodes high-level visual geometries and complements the loss of spatial information from reduced query resolutions. Additionally, we design a multi-perspective decoding strategy to efficiently disentangle the densely compressed visual cues into a high-dimensional 3D occupancy scene. Experimental results on both the Occ3D and SemanticKITTI benchmarks demonstrate the effectiveness of the proposed method, showing clear improvements over the baselines. More importantly, ProtoOcc achieves competitive performance against the baselines even with 75% reduced voxel resolution.
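To give a concrete, simplified picture of the idea the abstract describes, the sketch below clusters 2D image features into prototype vectors and scatters them into a low-resolution voxel query grid. It is an illustrative PyTorch toy, not the authors' implementation: the k-means clustering, the prototype count, the pixel_to_voxel projection index, and the residual averaging rule are all assumptions made here for demonstration.

import torch

def cluster_prototypes(feats, num_protos=32, iters=5):
    # Group 2D image features (N, C) into num_protos prototype vectors with a
    # few k-means iterations; also return each feature's hard assignment.
    n = feats.shape[0]
    protos = feats[torch.randperm(n)[:num_protos]].clone()   # random init
    for _ in range(iters):
        assign = torch.cdist(feats, protos).argmin(dim=1)     # (N,)
        for k in range(num_protos):
            members = feats[assign == k]
            if len(members) > 0:
                protos[k] = members.mean(dim=0)
    return protos, assign

def enrich_voxel_queries(voxel_queries, pixel_to_voxel, protos, assign):
    # Scatter the prototype of each pixel into the voxel it projects to,
    # average per voxel, and add the result residually to the low-resolution
    # voxel queries (V, C).
    V, C = voxel_queries.shape
    pixel_protos = protos[assign]                             # (N, C)
    summed = torch.zeros(V, C).index_add_(0, pixel_to_voxel, pixel_protos)
    counts = torch.zeros(V).index_add_(
        0, pixel_to_voxel, torch.ones_like(pixel_to_voxel, dtype=torch.float))
    return voxel_queries + summed / counts.clamp(min=1).unsqueeze(1)

# Toy usage: 1,000 pixel features, a 10x10x4 = 400-voxel query grid.
feats = torch.randn(1000, 64)
queries = torch.randn(400, 64)
pixel_to_voxel = torch.randint(0, 400, (1000,))  # stand-in for camera projection
protos, assign = cluster_prototypes(feats)
enriched = enrich_voxel_queries(queries, pixel_to_voxel, protos, assign)
print(enriched.shape)  # torch.Size([400, 64])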

View on arXiv
@article{oh2025_2503.15185,
  title={3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation},
  author={Gyeongrok Oh and Sungjune Kim and Heeju Ko and Hyung-gun Chi and Jinkyu Kim and Dongwook Lee and Daehyun Ji and Sungjoon Choi and Sujin Jang and Sangpil Kim},
  journal={arXiv preprint arXiv:2503.15185},
  year={2025}
}