
X-Scene: Large-Scale Driving Scene Generation with High Fidelity and Flexible Controllability

Main: 23 pages
11 figures
Bibliography: 5 pages
9 tables
Abstract

Diffusion models are advancing autonomous driving by enabling realistic data synthesis, predictive end-to-end planning, and closed-loop simulation, with a primary focus on temporally consistent generation. However, the generation of large-scale 3D scenes that require spatial coherence remains underexplored. In this paper, we propose X-Scene, a novel framework for large-scale driving scene generation that achieves both geometric intricacy and appearance fidelity, while offering flexible controllability. Specifically, X-Scene supports multi-granular control, including low-level conditions such as user-provided or text-driven layout for detailed scene composition and high-level semantic guidance such as user-intent and LLM-enriched text prompts for efficient customization. To enhance geometrical and visual fidelity, we introduce a unified pipeline that sequentially generates 3D semantic occupancy and the corresponding multiview images, while ensuring alignment between modalities. Additionally, we extend the generated local region into a large-scale scene through consistency-aware scene outpainting, which extrapolates new occupancy and images conditioned on the previously generated area, enhancing spatial continuity and preserving visual coherence. The resulting scenes are lifted into high-quality 3DGS representations, supporting diverse applications such as scene exploration. Comprehensive experiments demonstrate that X-Scene significantly advances controllability and fidelity for large-scale driving scene generation, empowering data generation and simulation for autonomous driving.

@article{yang2025_2506.13558,
  title={X-Scene: Large-Scale Driving Scene Generation with High Fidelity and Flexible Controllability},
  author={Yu Yang and Alan Liang and Jianbiao Mei and Yukai Ma and Yong Liu and Gim Hee Lee},
  journal={arXiv preprint arXiv:2506.13558},
  year={2025}
}