Diffusion Trajectory-guided Policy for Long-horizon Robot Manipulation

Abstract

Recently, Vision-Language-Action (VLA) models have advanced robot imitation learning, but high data collection costs and limited demonstrations hinder generalization, and current imitation learning methods struggle in out-of-distribution scenarios, especially for long-horizon tasks. A key challenge is mitigating compounding errors in imitation learning, which lead to cascading failures over extended trajectories. To address these challenges, we propose the Diffusion Trajectory-guided Policy (DTP) framework, which generates 2D trajectories through a diffusion model to guide policy learning for long-horizon tasks. By leveraging task-relevant trajectories, DTP provides trajectory-level guidance that reduces error accumulation. Our two-stage approach first trains a generative vision-language model to produce diffusion-based trajectories, then uses them to refine the imitation policy. Experiments on the CALVIN benchmark show that DTP outperforms state-of-the-art baselines by 25% in success rate, starting from scratch without external pretraining. Moreover, DTP significantly improves real-world robot performance.
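The two-stage idea can be illustrated with a minimal sketch. This is not the paper's implementation: all names are hypothetical, a point-wise average of demonstrations stands in for the trajectory diffusion model, and a waypoint-following rule stands in for the learned imitation policy. It only shows the structure of stage 1 (generate a task-relevant 2D trajectory) feeding stage 2 (a policy conditioned on that trajectory).

```python
# Illustrative sketch of DTP's two-stage structure (all names hypothetical).
# Stage 1 here averages demo trajectories as a stand-in for the diffusion
# model; stage 2 is a toy policy that follows the generated waypoints.

def train_trajectory_generator(demos):
    """Stage 1 stand-in: fit a trajectory generator from demonstrations.

    In DTP this would be a generative vision-language model producing
    diffusion-based 2D trajectories; here we just average point-wise.
    """
    T = len(demos[0])
    return [
        (
            sum(d[t][0] for d in demos) / len(demos),
            sum(d[t][1] for d in demos) / len(demos),
        )
        for t in range(T)
    ]

def policy_step(obs, trajectory, t):
    """Stage 2 stand-in: the policy conditions on the generated trajectory,
    moving straight toward the next waypoint instead of acting open-loop.
    """
    wx, wy = trajectory[min(t, len(trajectory) - 1)]
    return (wx - obs[0], wy - obs[1])

# Toy straight-line 2D demonstrations from (0, 0) to (1, 1).
T = 10
demos = [[(t / (T - 1), t / (T - 1)) for t in range(T)] for _ in range(3)]

traj = train_trajectory_generator(demos)
obs = (0.0, 0.0)
for t in range(T):
    dx, dy = policy_step(obs, traj, t)
    obs = (obs[0] + dx, obs[1] + dy)

print(obs)  # the toy policy ends at the final waypoint, (1.0, 1.0)
```

Because the policy tracks trajectory-level waypoints rather than accumulating raw per-step actions, a deviation at one step is corrected at the next, which is the intuition behind DTP's reduced compounding error.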

@article{fan2025_2502.10040,
  title={Diffusion Trajectory-guided Policy for Long-horizon Robot Manipulation},
  author={Shichao Fan and Quantao Yang and Yajie Liu and Kun Wu and Zhengping Che and Qingjie Liu and Min Wan},
  journal={arXiv preprint arXiv:2502.10040},
  year={2025}
}