DiffVLA: Vision-Language Guided Diffusion Planning for Autonomous Driving

Abstract

Research interest in end-to-end autonomous driving has surged owing to its fully differentiable design integrating modular tasks, i.e., perception, prediction, and planning, which enables optimization in pursuit of the ultimate goal. Despite the great potential of the end-to-end paradigm, existing methods suffer from several limitations, including expensive BEV (bird's eye view) computation, limited action diversity, and sub-optimal decisions in complex real-world scenarios. To address these challenges, we propose a novel hybrid sparse-dense diffusion policy empowered by a Vision-Language Model (VLM), called Diff-VLA. We explore a sparse diffusion representation for efficient multi-modal driving behavior. Moreover, we rethink the effectiveness of VLM driving decisions and improve trajectory generation guidance through deep interaction across agent, map instances, and VLM output. Our method shows superior performance in the Autonomous Grand Challenge 2025, which contains challenging real and reactive synthetic scenarios. Our method achieves 45.0 PDMS.

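The abstract only sketches the hybrid diffusion policy at a high level. Below is a minimal, hedged illustration of how a trajectory denoiser conditioned on agent, map-instance, and VLM tokens could be wired up in PyTorch; the class name TrajectoryDenoiser, the token dimensions, the horizon, and the network depth are illustrative assumptions and are not taken from the paper.

import torch
import torch.nn as nn

class TrajectoryDenoiser(nn.Module):
    """Hypothetical sketch: denoise a noised ego trajectory conditioned on fused scene tokens."""
    def __init__(self, horizon=8, d_model=256, n_heads=8):
        super().__init__()
        self.horizon = horizon
        self.traj_embed = nn.Linear(2, d_model)        # (x, y) waypoints -> tokens
        self.time_embed = nn.Embedding(1000, d_model)  # diffusion timestep embedding
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        encoder_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.self_attn = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)              # predict per-waypoint noise

    def forward(self, noisy_traj, t, cond_tokens):
        # noisy_traj: (B, H, 2); t: (B,); cond_tokens: (B, N, d_model),
        # where cond_tokens would concatenate agent, map-instance, and VLM embeddings.
        x = self.traj_embed(noisy_traj) + self.time_embed(t)[:, None, :]
        x, _ = self.cross_attn(x, cond_tokens, cond_tokens)  # inject scene/VLM guidance
        x = self.self_attn(x)
        return self.head(x)                                  # predicted noise

# Usage sketch: different noise seeds, iteratively denoised, would yield the
# multi-modal driving behaviors the abstract refers to (all shapes assumed).
model = TrajectoryDenoiser()
noisy = torch.randn(4, 8, 2)        # batch of 4 noised 8-waypoint plans
t = torch.randint(0, 1000, (4,))    # diffusion timesteps
cond = torch.randn(4, 32, 256)      # fused agent / map / VLM tokens
eps_hat = model(noisy, t, cond)     # (4, 8, 2) predicted noise
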
@article{jiang2025_2505.19381,
  title={DiffVLA: Vision-Language Guided Diffusion Planning for Autonomous Driving},
  author={Anqing Jiang and Yu Gao and Zhigang Sun and Yiru Wang and Jijun Wang and Jinghao Chai and Qian Cao and Yuweng Heng and Hao Jiang and Yunda Dong and Zongzheng Zhang and Xianda Guo and Hao Sun and Hao Zhao},
  journal={arXiv preprint arXiv:2505.19381},
  year={2025}
}