Semi-Supervised Vision-Centric 3D Occupancy World Model for Autonomous Driving

11 February 2025
Xiang Li
Pengfei Li
Yupeng Zheng
Wei Sun
Yan Wang
Yilun Chen
Abstract

Understanding world dynamics is crucial for planning in autonomous driving. Recent methods attempt to achieve this by learning a 3D occupancy world model that forecasts future surrounding scenes based on current observations. However, 3D occupancy labels are still required to produce promising results. Considering the high annotation cost for 3D outdoor scenes, we propose a semi-supervised vision-centric 3D occupancy world model, PreWorld, to leverage the potential of 2D labels through a novel two-stage training paradigm: a self-supervised pre-training stage and a fully-supervised fine-tuning stage. Specifically, during the pre-training stage, we utilize an attribute projection head to generate different attribute fields of a scene (e.g., RGB, density, semantic), thus enabling temporal supervision from 2D labels via volume rendering techniques. Furthermore, we introduce a simple yet effective state-conditioned forecasting module to recursively forecast future occupancy and ego trajectory in a direct manner. Extensive experiments on the nuScenes dataset validate the effectiveness and scalability of our method, and demonstrate that PreWorld achieves competitive performance across 3D occupancy prediction, 4D occupancy forecasting and motion planning tasks.
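
To make the pre-training idea concrete, the sketch below is a rough, hypothetical illustration (not the authors' code; all names, shapes and hyperparameters are assumptions) of supervising an attribute field with 2D labels via volume rendering: per-point density, RGB and semantic values are alpha-composited along camera rays, and the rendered 2D quantities are compared against 2D image colors and semantic labels.

import torch

def composite_along_ray(density, attrs, deltas):
    """Alpha-composite per-point attributes (e.g. RGB or semantic logits)
    into one value per ray using standard volume rendering weights.

    density: (N_rays, N_samples)      non-negative densities sigma_i
    attrs:   (N_rays, N_samples, C)   per-point attribute values
    deltas:  (N_rays, N_samples)      distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-density * deltas)                # opacity per sample
    # transmittance T_i = prod_{j < i} (1 - alpha_j)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1),
        dim=1)[:, :-1]
    weights = alpha * trans                                   # (N_rays, N_samples)
    return (weights.unsqueeze(-1) * attrs).sum(dim=1)         # (N_rays, C)

# Hypothetical usage: render RGB and semantic logits for a batch of rays,
# then supervise with 2D image colors and 2D semantic labels.
N_rays, N_samples, n_classes = 1024, 64, 17
density    = torch.rand(N_rays, N_samples)
rgb        = torch.rand(N_rays, N_samples, 3)
sem_logits = torch.randn(N_rays, N_samples, n_classes)
deltas     = torch.full((N_rays, N_samples), 0.5)

rendered_rgb = composite_along_ray(density, rgb, deltas)
rendered_sem = composite_along_ray(density, sem_logits, deltas)

gt_rgb = torch.rand(N_rays, 3)
gt_sem = torch.randint(0, n_classes, (N_rays,))
loss = torch.nn.functional.mse_loss(rendered_rgb, gt_rgb) \
     + torch.nn.functional.cross_entropy(rendered_sem, gt_sem)

The state-conditioned forecasting module is described as recursively forecasting future occupancy and ego trajectory in a direct manner. A minimal sketch of such a recursive rollout, again purely illustrative and not the paper's architecture, could look like this:

import torch
import torch.nn as nn

class ForecastStep(nn.Module):
    """One forecasting step conditioned on an occupancy feature and ego state."""
    def __init__(self, occ_dim=256, state_dim=8):
        super().__init__()
        self.occ_head = nn.Sequential(
            nn.Linear(occ_dim + state_dim, occ_dim), nn.ReLU(),
            nn.Linear(occ_dim, occ_dim))
        self.traj_head = nn.Linear(occ_dim + state_dim, 2)    # next (x, y) waypoint

    def forward(self, occ_feat, ego_state):
        x = torch.cat([occ_feat, ego_state], dim=-1)
        return self.occ_head(x), self.traj_head(x)

def rollout(step, occ_feat, ego_state, horizon=6):
    """Recursively forecast `horizon` future occupancy features and waypoints,
    feeding each prediction back in as the condition for the next step."""
    occs, waypoints = [], []
    for _ in range(horizon):
        occ_feat, wp = step(occ_feat, ego_state)
        # condition the next step on the newly predicted waypoint
        ego_state = torch.cat([ego_state[..., 2:], wp], dim=-1)
        occs.append(occ_feat)
        waypoints.append(wp)
    return torch.stack(occs, dim=1), torch.stack(waypoints, dim=1)

# Hypothetical usage with a batch of 4 scenes:
step = ForecastStep()
future_occ, traj = rollout(step, torch.randn(4, 256), torch.randn(4, 8))
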

@article{li2025_2502.07309,
  title={Semi-Supervised Vision-Centric 3D Occupancy World Model for Autonomous Driving},
  author={Xiang Li and Pengfei Li and Yupeng Zheng and Wei Sun and Yan Wang and Yilun Chen},
  journal={arXiv preprint arXiv:2502.07309},
  year={2025}
}