
Zero-Shot Scene Understanding with Multimodal Large Language Models for Automated Vehicles

Main: 7 pages, 3 figures
Abstract

Scene understanding is critical for various downstream tasks in autonomous driving, including facilitating driver-agent communication and enhancing human-centered explainability of autonomous vehicle (AV) decisions. This paper evaluates the capability of four multimodal large language models (MLLMs), including relatively small models, to understand scenes in a zero-shot, in-context learning setting. Additionally, we explore whether combining these models using an ensemble approach with majority voting can enhance scene understanding performance. Our experiments demonstrate that GPT-4o, the largest model, outperforms the others in scene understanding. However, the performance gap between GPT-4o and the smaller models is relatively modest, suggesting that advanced techniques such as improved in-context learning, retrieval-augmented generation (RAG), or fine-tuning could further optimize the smaller models' performance. We also observe mixed results with the ensemble approach: while some scene attributes show improvement in performance metrics such as F1-score, others experience a decline. These findings highlight the need for more sophisticated ensemble techniques to achieve consistent gains across all scene attributes. This study underscores the potential of leveraging MLLMs for scene understanding and provides insights into optimizing their performance for autonomous driving applications.
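The abstract describes fusing the four MLLMs' per-attribute predictions with majority voting. The sketch below is a minimal, illustrative implementation of that idea, not the authors' code: the attribute names, labels, and the first-seen tie-breaking rule are assumptions made for the example.

```python
from collections import Counter

def majority_vote(predictions: list[dict[str, str]]) -> dict[str, str]:
    """Fuse per-attribute labels from several models by simple majority.

    predictions: one dict per model, mapping a scene attribute
    (e.g. "weather", "time_of_day") to that model's predicted label.
    Ties are broken in favor of the label encountered first.
    """
    fused = {}
    for attr in predictions[0]:
        votes = Counter(p[attr] for p in predictions if attr in p)
        fused[attr] = votes.most_common(1)[0][0]
    return fused

# Hypothetical outputs from three MLLMs for one driving scene
model_outputs = [
    {"weather": "rainy", "time_of_day": "night", "road_type": "highway"},
    {"weather": "rainy", "time_of_day": "dusk",  "road_type": "highway"},
    {"weather": "clear", "time_of_day": "night", "road_type": "highway"},
]

print(majority_vote(model_outputs))
# {'weather': 'rainy', 'time_of_day': 'night', 'road_type': 'highway'}
```

With an even number of models, as in the four-model setup evaluated here, such a plain vote can tie, which is one reason the paper notes that more sophisticated ensembling may be needed for consistent gains across attributes.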

@article{elhenawy2025_2506.12232,
  title={Zero-Shot Scene Understanding with Multimodal Large Language Models for Automated Vehicles},
  author={Mohammed Elhenawy and Shadi Jaradat and Taqwa I. Alhadidi and Huthaifa I. Ashqar and Ahmed Jaber and Andry Rakotonirainy and Mohammad Abu Tami},
  journal={arXiv preprint arXiv:2506.12232},
  year={2025}
}