
Causal World Modeling for Robot Control

Lin Li
Qihang Zhang
Yiming Luo
Shuai Yang
Ruilin Wang
Fei Han
Mingrui Yu
Zelin Gao
Nan Xue
Xing Zhu
Yujun Shen
Yinghao Xu
Main: 17 pages · Appendix: 8 pages · Bibliography: 6 pages · 10 figures · 10 tables
Abstract

This work highlights that video world modeling, alongside vision-language pre-training, establishes a new and independent foundation for robot learning. Intuitively, video world models provide the ability to imagine the near future by understanding the causality between actions and visual dynamics. Inspired by this, we introduce LingBot-VA, an autoregressive diffusion framework that learns frame prediction and policy execution simultaneously. Our model features three carefully crafted designs: (1) a shared latent space, integrating vision and action tokens, driven by a Mixture-of-Transformers (MoT) architecture, (2) a closed-loop rollout mechanism, enabling continual acquisition of environmental feedback through ground-truth observations, (3) an asynchronous inference pipeline, parallelizing action prediction and motor execution to support efficient control (see the sketch below). We evaluate our model on both simulation benchmarks and real-world scenarios, where it shows significant promise in long-horizon manipulation, data efficiency in post-training, and strong generalizability to novel configurations. The code and model are made publicly available to benefit the community.
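To make the asynchronous inference idea concrete, below is a minimal Python sketch of how action prediction and motor execution can overlap in a producer-consumer loop. It is not the LingBot-VA implementation: the function names (`predict_action_chunk`, `execute_action`, `asynchronous_control_loop`), the chunk size, and the timing constants are all hypothetical placeholders standing in for the world-model policy and the robot interface.

```python
import queue
import threading
import time


def predict_action_chunk(observation, chunk_size=8):
    """Hypothetical stand-in for the world-model policy: predict the next
    chunk of actions from the latest observation. Inference latency is
    simulated with a sleep."""
    time.sleep(0.05)
    return [f"action_{observation}_{i}" for i in range(chunk_size)]


def execute_action(action):
    """Hypothetical stand-in for sending one action to the motors."""
    time.sleep(0.01)
    return f"obs_after_{action}"


def asynchronous_control_loop(num_chunks=5):
    # Small queue so the predictor stays only slightly ahead of execution.
    action_queue = queue.Queue(maxsize=2)
    done = threading.Event()

    def predictor():
        obs = "obs_0"
        for _ in range(num_chunks):
            chunk = predict_action_chunk(obs)
            action_queue.put(chunk)   # blocks if the executor falls behind
            obs = f"updated_{obs}"    # in practice: refreshed ground-truth observation
        done.set()

    threading.Thread(target=predictor, daemon=True).start()

    # Executor consumes chunks while the predictor prepares the next one,
    # so model inference and motor execution proceed in parallel.
    while not (done.is_set() and action_queue.empty()):
        try:
            chunk = action_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        for action in chunk:
            execute_action(action)


if __name__ == "__main__":
    asynchronous_control_loop()
```

The design point illustrated here is simply that prediction latency is hidden behind execution: while the robot plays out one action chunk, the next chunk is already being computed from the most recent feedback.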
