One Demo Is All It Takes: Planning Domain Derivation with LLMs from A Single Demonstration

Abstract

Pre-trained Large Language Models (LLMs) have shown promise in solving planning problems but often struggle to ensure plan correctness, especially for long-horizon tasks. In contrast, traditional robotic task and motion planning (TAMP) frameworks address these challenges more reliably by combining high-level symbolic search with low-level motion planning. At the core of TAMP is the planning domain, an abstract representation of the world defined through symbolic predicates and actions. However, creating these domains typically demands substantial manual effort and domain expertise, limiting generalizability. We introduce Planning Domain Derivation with LLMs (PDDLLM), a novel approach that combines simulated physical interaction with LLM reasoning to improve planning performance. The method reduces reliance on human input by inferring planning domains from a single annotated task-execution demonstration. Unlike prior domain-inference methods that rely on partially predefined domains or natural-language domain descriptions, PDDLLM constructs domains entirely from scratch and automatically integrates them with low-level motion-planning skills, enabling fully automated long-horizon planning. PDDLLM is evaluated on over 1,200 diverse tasks spanning nine environments and benchmarked against six LLM-based planning baselines, demonstrating superior long-horizon planning performance, lower token costs, and successful deployment on multiple physical robot platforms.
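As a rough illustration of the kind of pipeline the abstract describes, the Python sketch below shows how a single annotated demonstration might be serialized into a prompt asking an LLM to propose a PDDL domain. The `DemoStep` format, `build_domain_prompt`, and `query_llm` are hypothetical placeholders for illustration, not the paper's actual interface or released code.

```python
# Illustrative sketch only: a toy view of deriving a planning domain from one
# annotated demonstration. All names here are assumptions, not the authors' API.

from dataclasses import dataclass
from typing import List


@dataclass
class DemoStep:
    """One annotated step of a task-execution demonstration (hypothetical format)."""
    skill: str                 # low-level skill invoked, e.g. "pick(block_a)"
    state_before: List[str]    # symbolic facts observed before the step
    state_after: List[str]     # symbolic facts observed after the step


def build_domain_prompt(steps: List[DemoStep]) -> str:
    """Serialize the demonstration into an instruction asking for a PDDL domain."""
    lines = ["Derive a PDDL domain (predicates and actions) consistent with this demonstration:"]
    for i, s in enumerate(steps):
        lines.append(f"Step {i}: skill={s.skill}")
        lines.append(f"  before: {', '.join(s.state_before)}")
        lines.append(f"  after:  {', '.join(s.state_after)}")
    lines.append("Return only the (define (domain ...)) block.")
    return "\n".join(lines)


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion LLM API."""
    raise NotImplementedError("plug in your LLM client here")


if __name__ == "__main__":
    demo = [
        DemoStep("pick(block_a)",
                 ["on_table(block_a)", "hand_empty()"],
                 ["holding(block_a)"]),
        DemoStep("place(block_a, block_b)",
                 ["holding(block_a)", "clear(block_b)"],
                 ["on(block_a, block_b)", "hand_empty()"]),
    ]
    print(build_domain_prompt(demo))
    # In a full pipeline the prompt would be sent to an LLM (query_llm), and the
    # returned PDDL domain validated with a symbolic planner and, per the abstract,
    # simulated physical interaction, before being paired with motion-planning skills.
```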

@article{huang2025_2505.18382,
  title={One Demo Is All It Takes: Planning Domain Derivation with LLMs from A Single Demonstration},
  author={Jinbang Huang and Yixin Xiao and Zhanguang Zhang and Mark Coates and Jianye Hao and Yingxue Zhang},
  journal={arXiv preprint arXiv:2505.18382},
  year={2025}
}