ReasonPlan: Unified Scene Prediction and Decision Reasoning for Closed-loop Autonomous Driving

26 May 2025
Xueyi Liu, Zuodong Zhong, Yuxin Guo, Yun-Fu Liu, Zhiguo Su, Qichao Zhang, Junli Wang, Yinfeng Gao, Yupeng Zheng, Qiao Lin, Huiyong Chen, Dongbin Zhao
Abstract

Due to their powerful vision-language reasoning and generalization abilities, multimodal large language models (MLLMs) have garnered significant attention in the field of end-to-end (E2E) autonomous driving. However, their application to closed-loop systems remains underexplored, and current MLLM-based methods have not shown clear superiority over mainstream E2E imitation learning approaches. In this work, we propose ReasonPlan, a novel MLLM fine-tuning framework designed for closed-loop driving through holistic reasoning with a self-supervised Next Scene Prediction task and a supervised Decision Chain-of-Thought process. This dual mechanism encourages the model to align visual representations with actionable driving context while promoting interpretable and causally grounded decision making. We curate a planning-oriented decision reasoning dataset, namely PDR, comprising 210k diverse and high-quality samples. Our method outperforms the mainstream E2E imitation learning method by a large margin of 19% L2 and 16.1 driving score on the Bench2Drive benchmark. Furthermore, ReasonPlan demonstrates strong zero-shot generalization on the unseen DOS benchmark, highlighting its adaptability in handling zero-shot corner cases. Code and dataset will be available at this https URL.
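
The abstract describes a dual training objective: a self-supervised Next Scene Prediction (NSP) task alongside a supervised Decision Chain-of-Thought (CoT) loss. The sketch below illustrates how such a combined objective could be wired up in PyTorch; the module names (predict_next_scene, encode_scene), the feature-space MSE formulation, and the 0.5 loss weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def reasonplan_style_step(mllm, batch, nsp_weight=0.5):
    """One training step combining Decision CoT supervision with Next Scene Prediction.

    `mllm` is assumed to be a vision-language model exposing the hypothetical
    heads used below; `batch` carries current images, next-frame images, and
    tokenized chain-of-thought reasoning with action labels.
    """
    # Supervised Decision CoT: standard next-token cross-entropy over the
    # reasoning and decision tokens.
    out = mllm(
        input_ids=batch["cot_input_ids"],
        pixel_values=batch["images"],
        labels=batch["cot_labels"],
    )
    cot_loss = out.loss

    # Self-supervised Next Scene Prediction: regress the representation of the
    # future frame from the current visual context (hypothetical heads).
    pred_next = mllm.predict_next_scene(batch["images"])
    with torch.no_grad():
        target = mllm.encode_scene(batch["next_images"])
    nsp_loss = F.mse_loss(pred_next, target)

    # Joint objective: decision reasoning plus scene-prediction regularization.
    return cot_loss + nsp_weight * nsp_loss
```

In this reading, the NSP term acts as a regularizer that ties visual features to how the scene will evolve, while the CoT term grounds the final driving decision in an explicit reasoning trace.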

@article{liu2025_2505.20024,
  title={ReasonPlan: Unified Scene Prediction and Decision Reasoning for Closed-loop Autonomous Driving},
  author={Xueyi Liu and Zuodong Zhong and Yuxin Guo and Yun-Fu Liu and Zhiguo Su and Qichao Zhang and Junli Wang and Yinfeng Gao and Yupeng Zheng and Qiao Lin and Huiyong Chen and Dongbin Zhao},
  journal={arXiv preprint arXiv:2505.20024},
  year={2025}
}