UniDriveDreamer: A Single-Stage Multimodal World Model for Autonomous Driving

Guosheng Zhao
Yaozeng Wang
Xiaofeng Wang
Zheng Zhu
Tingdong Yu
Guan Huang
Yongchen Zai
Ji Jiao
Changliang Xue
Xiaole Wang
Zhen Yang
Futang Zhu
Xingang Wang
Main: 12 pages
8 figures
Bibliography: 4 pages
4 tables
Abstract

World models have demonstrated significant promise for data synthesis in autonomous driving. However, existing methods predominantly concentrate on single-modality generation, typically focusing on either multi-camera video or LiDAR sequence synthesis. In this paper, we propose UniDriveDreamer, a single-stage unified multimodal world model for autonomous driving, which directly generates multimodal future observations without relying on intermediate representations or cascaded modules. Our framework introduces a LiDAR-specific variational autoencoder (VAE) designed to encode input LiDAR sequences, alongside a video VAE for multi-camera images. To ensure cross-modal compatibility and training stability, we propose Unified Latent Anchoring (ULA), which explicitly aligns the latent distributions of the two modalities. The aligned features are fused and processed by a diffusion transformer that jointly models their geometric correspondence and temporal evolution. Additionally, structured scene layout information is projected per modality as a conditioning signal to guide the synthesis. Extensive experiments demonstrate that UniDriveDreamer outperforms previous state-of-the-art methods in both video and LiDAR generation, while also yielding measurable improvements in downstream tasks.
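The abstract's pipeline (modality-specific VAE encoders, latent alignment via ULA, fusion for a joint diffusion transformer) can be illustrated with a minimal sketch. This is not the paper's implementation: the encoders are stand-in linear projections, and ULA is approximated here by simple per-modality standardization to a shared zero-mean, unit-variance scale; all names, shapes, and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D_VID, D_LID, D_LAT = 8, 256, 128, 64  # frames, raw dims, shared latent dim (illustrative)

# Stand-in "VAE encoders": fixed linear projections into a common latent width.
W_vid = rng.standard_normal((D_VID, D_LAT))
W_lid = rng.standard_normal((D_LID, D_LAT))

def anchor(z, eps=1e-6):
    """Unified Latent Anchoring stand-in: standardize a latent sequence to
    zero mean / unit variance so both modalities live at one common scale."""
    return (z - z.mean()) / (z.std() + eps)

# Raw modality features at deliberately mismatched scales, to show why
# alignment matters before fusing the two latent streams.
video_raw = rng.standard_normal((T, D_VID)) * 5.0   # large-scale video features
lidar_raw = rng.standard_normal((T, D_LID)) * 0.2   # small-scale LiDAR features

z_vid = anchor(video_raw @ W_vid)
z_lid = anchor(lidar_raw @ W_lid)

# Fuse the aligned latents along the channel axis; in the paper this fused
# sequence would be denoised by the joint diffusion transformer.
fused = np.concatenate([z_vid, z_lid], axis=-1)     # shape (T, 2 * D_LAT)
```

After anchoring, both latent streams have matching statistics regardless of their original scales, which is the property the fusion step relies on in this sketch.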
