A Hierarchical Test Platform for Vision Language Model (VLM)-Integrated Real-World Autonomous Driving

Vision-Language Models (VLMs) have demonstrated notable promise in autonomous driving by offering the potential for multimodal reasoning through pretraining on extensive image-text pairs. However, adapting these models from broad web-scale data to the safety-critical context of driving presents a significant challenge, commonly referred to as domain shift. Existing simulation-based and dataset-driven evaluation methods, although valuable, often fail to capture the full complexity of real-world scenarios and cannot easily accommodate repeatable closed-loop testing with flexible scenario manipulation. In this paper, we introduce a hierarchical real-world test platform specifically designed to evaluate VLM-integrated autonomous driving systems. Our approach includes a modular, low-latency on-vehicle middleware that allows seamless incorporation of various VLMs, a clearly separated perception-planning-control architecture that can accommodate both VLM-based and conventional modules, and a configurable suite of real-world testing scenarios on a closed track that facilitates controlled yet authentic evaluations. We demonstrate the testing and evaluation capabilities of the proposed platform through a case study involving a VLM-enabled autonomous vehicle, highlighting how our test framework supports robust experimentation under diverse conditions.
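To illustrate the kind of separation the abstract describes, the sketch below shows a minimal perception-planning-control loop in which a VLM-based planner and a conventional rule-based planner sit behind a common interface, so either can be plugged into the same on-vehicle pipeline. This is an illustrative assumption of how such a middleware could be structured, not the authors' actual API; all class names, method signatures, and the `vlm_client.query` call are hypothetical.

```python
# Minimal sketch of a pluggable perception-planning-control loop.
# All names (Observation, PlannedAction, Planner, vlm_client.query) are
# illustrative assumptions, not the paper's actual middleware interfaces.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Observation:
    image: bytes        # front-camera frame from the perception module
    speed_mps: float    # current vehicle speed


@dataclass
class PlannedAction:
    target_speed_mps: float
    steering_angle_rad: float


class Planner(ABC):
    """Common interface so VLM-based and conventional planners are interchangeable."""
    @abstractmethod
    def plan(self, obs: Observation) -> PlannedAction: ...


class RuleBasedPlanner(Planner):
    """Conventional baseline: hold a fixed cruise speed, keep the lane."""
    def plan(self, obs: Observation) -> PlannedAction:
        return PlannedAction(target_speed_mps=8.0, steering_angle_rad=0.0)


class VLMPlanner(Planner):
    """Queries a vision-language model for a high-level driving decision."""
    def __init__(self, vlm_client):
        # vlm_client is any object exposing query(image=..., prompt=...) -> str;
        # this is a hypothetical interface for illustration.
        self.vlm_client = vlm_client

    def plan(self, obs: Observation) -> PlannedAction:
        reply = self.vlm_client.query(
            image=obs.image,
            prompt="Given this driving scene, answer with one word: stop, slow, or proceed.",
        ).lower()
        if "stop" in reply:
            return PlannedAction(target_speed_mps=0.0, steering_angle_rad=0.0)
        if "slow" in reply:
            return PlannedAction(target_speed_mps=max(obs.speed_mps * 0.5, 2.0),
                                 steering_angle_rad=0.0)
        return PlannedAction(target_speed_mps=8.0, steering_angle_rad=0.0)


def control_step(obs: Observation, planner: Planner) -> PlannedAction:
    """One closed-loop tick: perception output in, low-level control targets out."""
    return planner.plan(obs)
```

Under this kind of separation, swapping `VLMPlanner` for `RuleBasedPlanner` (or another VLM backend) changes only the planner object handed to the loop, which is one way the repeatable closed-track comparisons described in the abstract could be set up.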
@article{zhou2025_2506.14100,
  title   = {A Hierarchical Test Platform for Vision Language Model (VLM)-Integrated Real-World Autonomous Driving},
  author  = {Yupeng Zhou and Can Cui and Juntong Peng and Zichong Yang and Juanwu Lu and Jitesh H. Panchal and Bin Yao and Ziran Wang},
  journal = {arXiv preprint arXiv:2506.14100},
  year    = {2025}
}