Grounded Vision-Language Interpreter for Integrated Task and Motion Planning

3 June 2025
Jeremy Siburian
Keisuke Shirai
C. C. Beltran-Hernandez
Masashi Hamaya
Michael Görner
Atsushi Hashimoto
Main: 8 pages · Appendix: 11 pages · Bibliography: 5 pages · 10 figures · 5 tables
Abstract

While recent advances in vision-language models (VLMs) have accelerated the development of language-guided robot planners, their black-box nature often lacks the safety guarantees and interpretability crucial for real-world deployment. Conversely, classical symbolic planners offer rigorous safety verification but require significant expert knowledge for setup. To bridge this gap, this paper proposes ViLaIn-TAMP, a hybrid planning framework for enabling verifiable, interpretable, and autonomous robot behaviors. ViLaIn-TAMP comprises three main components: (1) ViLaIn (Vision-Language Interpreter), a prior framework that converts multimodal inputs into structured problem specifications using off-the-shelf VLMs without additional domain-specific training; (2) a modular Task and Motion Planning (TAMP) system that grounds these specifications in actionable trajectory sequences through symbolic and geometric constraint reasoning and can utilize learning-based skills for key manipulation phases; and (3) a corrective planning module that receives concrete feedback on failed solution attempts from the motion and task planning components and feeds adapted logic and geometric feasibility constraints back to ViLaIn to further refine the specification. We evaluate our framework on several challenging manipulation tasks in a cooking domain. We demonstrate that the proposed closed-loop corrective architecture achieves a mean success rate more than 30% higher than ViLaIn-TAMP without corrective planning.
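The closed-loop interaction the abstract describes — interpret, attempt to plan, and on failure feed planner diagnostics back into the interpreter — can be sketched as a simple retry loop. This is a minimal illustration, not the paper's implementation; all function names, the specification format, and the feedback contents are hypothetical stand-ins.

```python
# Hypothetical sketch of the corrective planning loop described in the
# abstract. vilain_interpret stands in for the VLM-based interpreter
# (ViLaIn), tamp_solve for the TAMP system; both are mock placeholders.

def vilain_interpret(observation, instruction, feedback=None):
    """Convert multimodal input into a structured problem specification.

    On retries, planner feedback is folded into the specification as
    additional feasibility constraints (illustrative only).
    """
    spec = {"goal": instruction, "scene": observation}
    if feedback:
        spec["constraints"] = feedback
    return spec

def tamp_solve(spec):
    """Ground the specification into a trajectory sequence.

    This mock fails until constraints are present, mimicking a planner
    that reports concrete geometric infeasibility on the first attempt.
    """
    if "constraints" in spec:
        return {"status": "success", "plan": ["pick", "place"]}
    return {"status": "failure",
            "feedback": ["grasp pose infeasible near pot rim"]}

def plan_with_correction(observation, instruction, max_attempts=3):
    """Closed-loop corrective planning: refine the spec until TAMP succeeds."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        spec = vilain_interpret(observation, instruction, feedback)
        result = tamp_solve(spec)
        if result["status"] == "success":
            return result["plan"], attempt
        feedback = result["feedback"]  # feed failure diagnostics back
    return None, max_attempts

plan, attempts = plan_with_correction({"objects": ["pot", "lid"]},
                                      "put the lid on the pot")
```

With these mocks, the first attempt fails, the feedback is folded into a refined specification, and the second attempt succeeds — the same open-loop-versus-corrective contrast the paper's evaluation quantifies.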

@article{siburian2025_2506.03270,
  title={Grounded Vision-Language Interpreter for Integrated Task and Motion Planning},
  author={Jeremy Siburian and Keisuke Shirai and Cristian C. Beltran-Hernandez and Masashi Hamaya and Michael Görner and Atsushi Hashimoto},
  journal={arXiv preprint arXiv:2506.03270},
  year={2025}
}