DSOcc: Leveraging Depth Awareness and Semantic Aid to Boost Camera-Based 3D Semantic Occupancy Prediction

27 May 2025
Naiyu Fang
Zheyuan Zhou
Kang Wang
Ruibo Li
Lemiao Qiu
Shuyou Zhang
Zhe Wang
Guosheng Lin
arXiv (abs) | PDF | HTML
Main: 10 pages · Bibliography: 3 pages · 9 figures · 14 tables
Abstract

Camera-based 3D semantic occupancy prediction offers an efficient and cost-effective solution for perceiving surrounding scenes in autonomous driving. However, existing works rely on explicit occupancy state inference, which leads to numerous incorrect feature assignments, while insufficient samples restrict the learning of occupancy class inference. To address these challenges, we propose leveraging Depth awareness and Semantic aid to boost camera-based 3D semantic Occupancy prediction (DSOcc). We jointly perform occupancy state and occupancy class inference, where a soft occupancy confidence is calculated through a non-learning method and multiplied with image features to make the voxel representation depth-aware, enabling adaptive implicit occupancy state inference. Rather than focusing on improving feature learning, we directly utilize well-trained image semantic segmentation and fuse multiple frames with their occupancy probabilities to aid occupancy class inference, thereby enhancing robustness. Experimental results demonstrate that DSOcc achieves state-of-the-art performance on the SemanticKITTI dataset among camera-based methods.
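To make the depth-aware weighting idea from the abstract concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' code): a soft occupancy confidence is read off a per-pixel depth distribution without any learned parameters and used to scale the image features assigned to each voxel. All function names, tensor shapes, and the binned-depth representation are assumptions for illustration only.

# Hypothetical sketch of depth-aware voxel feature weighting; names and shapes are assumed.
import torch

def depth_aware_voxel_features(img_feats, depth_probs, voxel_pix_idx, voxel_depth_bin):
    """
    img_feats:       (N_pix, C)  image features per pixel
    depth_probs:     (N_pix, D)  per-pixel depth distribution over D bins (sums to 1)
    voxel_pix_idx:   (N_vox,)    pixel index each voxel projects to
    voxel_depth_bin: (N_vox,)    depth bin index of each voxel along that ray
    returns:         (N_vox, C)  depth-aware voxel features
    """
    # Soft occupancy confidence: probability mass of the depth bin the voxel
    # falls into -- obtained directly from depth_probs, with no learned parameters.
    conf = depth_probs[voxel_pix_idx, voxel_depth_bin]   # (N_vox,)
    feats = img_feats[voxel_pix_idx]                      # (N_vox, C)
    # Scale features by confidence so the voxel representation is depth-aware,
    # giving an implicit (soft) occupancy state instead of a hard occupied/empty gate.
    return conf.unsqueeze(-1) * feats

# Toy usage with random tensors, purely illustrative.
if __name__ == "__main__":
    N_pix, C, D, N_vox = 100, 32, 64, 500
    img_feats = torch.randn(N_pix, C)
    depth_probs = torch.softmax(torch.randn(N_pix, D), dim=-1)
    voxel_pix_idx = torch.randint(0, N_pix, (N_vox,))
    voxel_depth_bin = torch.randint(0, D, (N_vox,))
    print(depth_aware_voxel_features(img_feats, depth_probs, voxel_pix_idx, voxel_depth_bin).shape)

The semantic-aid step described in the abstract (fusing well-trained image segmentation across frames, weighted by occupancy probabilities) would operate on the resulting voxel features; the sketch above covers only the depth-aware weighting.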

@article{fang2025_2505.20951,
  title={DSOcc: Leveraging Depth Awareness and Semantic Aid to Boost Camera-Based 3D Semantic Occupancy Prediction},
  author={Naiyu Fang and Zheyuan Zhou and Kang Wang and Ruibo Li and Lemiao Qiu and Shuyou Zhang and Zhe Wang and Guosheng Lin},
  journal={arXiv preprint arXiv:2505.20951},
  year={2025}
}