H-WM: Robotic Task and Motion Planning Guided by Hierarchical World Model

Jinbang Huang
Wenyuan Chen
Zhiyuan Li
Oscar Pang
Xiao Hu
Lingfeng Zhang
Yuanzhao Hu
Zhanguang Zhang
Mark Coates
Tongtong Cao
Xingyue Quan
Yingxue Zhang
Main: 6 pages, 4 figures, 1 table; Bibliography: 2 pages
Abstract

World models are becoming central to robotic planning and control because they enable prediction of future state transitions. Existing approaches often emphasize video generation or natural-language prediction, which are difficult to ground in robot actions and suffer from compounding errors over long horizons. Classic task and motion planning (TAMP) models world transitions in logical space, enabling robust, robot-executable long-horizon reasoning; however, it typically operates independently of visual perception, preventing synchronized symbolic and visual state prediction. We propose a Hierarchical World Model (H-WM) that jointly predicts logical and visual state transitions within a unified framework. H-WM combines a high-level logical world model with a low-level visual world model, integrating the long-horizon robustness of symbolic reasoning with visual grounding. The hierarchical outputs provide stable intermediate guidance for long-horizon tasks, mitigating error accumulation and enabling robust execution across extended task sequences. Experiments across multiple vision-language-action (VLA) control policies demonstrate the effectiveness and generality of H-WM's guidance.
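
The two-level design the abstract describes (a logical world model for symbolic transitions, a visual world model for grounded goal prediction, both feeding guidance to a control policy) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: every class and method name here (`SymbolicState`, `LogicalWorldModel`, `VisualWorldModel`, `guide_policy`, `policy.execute`) is a hypothetical placeholder, and the transition and prediction bodies are stubs standing in for learned or TAMP-derived components.

```python
# Hypothetical sketch of a hierarchical (logical + visual) world model
# guiding a control policy; names and interfaces are illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class SymbolicState:
    # High-level logical state as a set of grounded predicates,
    # e.g. {"on(block_a, table)", "gripper_empty"}.
    predicates: frozenset


class LogicalWorldModel:
    """High-level model: predicts the symbolic state after a plan step."""

    def predict(self, state: SymbolicState, action: str) -> SymbolicState:
        # Stub transition: a TAMP-style model would apply the action's
        # add/delete effects; here we only record that the action ran.
        return SymbolicState(state.predicates | {f"done({action})"})


class VisualWorldModel:
    """Low-level model: predicts the visual observation for a symbolic subgoal."""

    def predict(self, image, subgoal: SymbolicState):
        # A learned model would predict the goal image; identity stub here.
        return image


def guide_policy(policy, logical_wm, visual_wm, image, state, plan):
    """Roll the hierarchical world model over a symbolic plan, handing each
    (symbolic subgoal, predicted goal image) pair to the control policy as
    stable intermediate guidance for long-horizon execution."""
    for action in plan:
        next_state = logical_wm.predict(state, action)     # logical transition
        goal_image = visual_wm.predict(image, next_state)  # visual grounding
        image = policy.execute(action, goal_image)         # e.g. a VLA policy
        state = next_state
    return state
```

The point of the loop is the one the abstract makes: the symbolic plan supplies stable per-step subgoals, so the visual predictions and policy execution are re-anchored at every step rather than compounding errors over the whole horizon.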
