Language-Vision Planner and Executor for Text-to-Visual Reasoning

Main: 8 pages · 32 figures · 6 tables · Bibliography: 5 pages · Appendix: 11 pages
Abstract

Advances in large language models (LLMs) and large vision models have fueled rapid progress in multi-modal visual-text reasoning. However, existing vision-language models (VLMs) suffer from limited generalization performance. Inspired by recent developments in LLMs for visual reasoning, this paper presents VLAgent, an AI system that creates a step-by-step visual reasoning plan as an easy-to-understand script and executes each step of the plan in real time, coupling the planning script with execution verification through an automated process. In the task planning phase, VLAgent adapts an LLM through in-context learning to generate a step-by-step plan for each user-submitted text-visual reasoning task. During the plan execution phase, VLAgent progressively refines the composition of neuro-symbolic executable modules to generate high-confidence reasoning results. VLAgent has three unique design characteristics. First, we improve the quality of plan generation through in-context learning, strengthening logical reasoning by reducing erroneous logic steps, incorrect programs, and LLM hallucinations. Second, we design a syntax-semantics parser to identify and correct additional logic errors in the LLM-generated planning script before launching the plan executor. Finally, we employ an ensemble method to improve the generalization performance of our step executor. Extensive experiments on four visual reasoning benchmarks (GQA, MME, NLVR2, VQAv2) show that VLAgent achieves significant performance gains for multi-modal text-visual reasoning applications compared to existing representative VLMs and LLM-based visual composition approaches such as ViperGPT and VisProg, thanks to the novel optimization modules of the VLAgent back-engine (SS-Parser, Plan Repairer, Output Verifiers). Code and data will be made available upon paper acceptance.
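The abstract describes a plan-then-execute pipeline: an LLM emits a line-oriented planning script, a syntax-semantics parser verifies it, and a step executor composes neuro-symbolic modules. The Python sketch below illustrates that control flow under stated assumptions only; the script format (VisProg-style OUT = MODULE(arg=value) lines) and all names (MODULE_REGISTRY, parse_step, execute_plan, the toy LOC/VQA modules) are hypothetical illustrations, not VLAgent's actual interface.

import re

# Hypothetical executable modules; a real executor would wrap detectors,
# captioners, VQA models, and symbolic operators.
MODULE_REGISTRY = {
    "LOC": lambda image, object: f"crop of {object} in {image}",
    "VQA": lambda image, question: f"answer to {question!r} on {image}",
}

STEP_PATTERN = re.compile(r"^(\w+)\s*=\s*(\w+)\((.*)\)$")

def parse_step(line):
    """Syntax check (line shape) and semantic check (module exists)."""
    match = STEP_PATTERN.match(line.strip())
    if match is None:
        raise SyntaxError(f"malformed step: {line!r}")
    out_var, module, arg_str = match.groups()
    if module not in MODULE_REGISTRY:
        raise NameError(f"unknown module: {module}")
    args = {}
    for pair in arg_str.split(","):
        if pair.strip():
            key, value = pair.split("=", 1)
            args[key.strip()] = value.strip()
    return out_var, module, args

def execute_plan(script, image):
    """Verify each step, then run it, resolving references to prior outputs."""
    state = {"IMAGE": image}
    for line in script.strip().splitlines():
        out_var, module, args = parse_step(line)  # verify before executing
        resolved = {k: state.get(v, v.strip("'\"")) for k, v in args.items()}
        state[out_var] = MODULE_REGISTRY[module](**resolved)
    return state

# A two-step plan for "What color is the cat?"
plan = """
BOX0 = LOC(image=IMAGE, object='cat')
ANS0 = VQA(image=BOX0, question='what color is it?')
"""
print(execute_plan(plan, image="<pixels>"))

Separating verification (parse_step) from execution (execute_plan) mirrors the abstract's design: a malformed or semantically invalid step is rejected before any module runs, which is where a repair stage such as the paper's SS-Parser and Plan Repairer would plug in.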

arXiv: https://arxiv.org/abs/2506.07778
@article{xu2025_2506.07778,
  title={Language-Vision Planner and Executor for Text-to-Visual Reasoning},
  author={Yichang Xu and Gaowen Liu and Ramana Rao Kompella and Sihao Hu and Tiansheng Huang and Fatih Ilhan and Selim Furkan Tekin and Zachary Yahn and Ling Liu},
  journal={arXiv preprint arXiv:2506.07778},
  year={2025}
}