Think Small, Act Big: Primitive Prompt Learning for Lifelong Robot Manipulation

1 April 2025
Yuanqi Yao
Siao Liu
Haoming Song
Delin Qu
Qizhi Chen
Yan Ding
Bin Zhao
Zhigang Wang
Xuelong Li
Dong Wang
Abstract

Building a lifelong robot that can effectively leverage prior knowledge for continuous skill acquisition remains significantly challenging. Despite the success of experience replay and parameter-efficient methods in alleviating the catastrophic forgetting problem, naively applying these methods fails to leverage the shared primitives between skills. To tackle these issues, we propose Primitive Prompt Learning (PPL) to achieve lifelong robot manipulation via reusable and extensible primitives. In our two-stage learning scheme, we first learn a set of primitive prompts to represent shared primitives during a multi-skill pre-training stage, where motion-aware prompts are learned to capture the semantic and motion primitives shared across different skills. Second, when acquiring new skills over the lifelong span, new prompts are appended and optimized with the frozen pre-trained prompts, boosting learning via knowledge transfer from old skills to new ones. For evaluation, we construct a large-scale skill dataset and conduct extensive experiments in both simulation and real-world tasks, demonstrating PPL's superior performance over state-of-the-art methods.
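The sketch below illustrates the two-stage prompt scheme the abstract describes: a pool of primitive prompts is learned during multi-skill pre-training, then frozen, and per-skill prompts are appended and trained for new skills. Class and parameter names (PromptPool, LifelongPromptPolicy, n_pretrain, etc.) are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of prompt-based lifelong learning; not the paper's code.
import torch
import torch.nn as nn


class PromptPool(nn.Module):
    """A pool of learnable prompt tokens prepended to the policy's input."""

    def __init__(self, n_prompts: int, dim: int):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim); prepend this pool's prompts.
        batch = tokens.shape[0]
        p = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, tokens], dim=1)


class LifelongPromptPolicy(nn.Module):
    """Stage 1: train shared prompts on multiple skills.
    Stage 2: freeze them and optimize only prompts appended per new skill."""

    def __init__(self, dim: int = 256, n_pretrain: int = 16):
        super().__init__()
        self.pretrained = PromptPool(n_pretrain, dim)
        self.new_pools = nn.ModuleList()  # one pool per newly acquired skill

    def add_skill(self, n_new: int, dim: int = 256) -> None:
        # Freeze the shared primitive prompts from pre-training.
        for p in self.pretrained.parameters():
            p.requires_grad_(False)
        self.new_pools.append(PromptPool(n_new, dim))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        tokens = self.pretrained(tokens)
        for pool in self.new_pools:
            tokens = pool(tokens)
        return tokens  # would be fed to the manipulation policy backbone


# Usage: after pre-training, only the newly appended prompts get gradients.
policy = LifelongPromptPolicy()
policy.add_skill(n_new=4)
trainable = [p for p in policy.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```

Keeping the pre-trained prompts frozen while optimizing only the appended ones is what lets knowledge transfer from old skills to new ones without overwriting the shared primitives.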

View on arXiv
@article{yao2025_2504.00420,
  title={Think Small, Act Big: Primitive Prompt Learning for Lifelong Robot Manipulation},
  author={Yuanqi Yao and Siao Liu and Haoming Song and Delin Qu and Qizhi Chen and Yan Ding and Bin Zhao and Zhigang Wang and Xuelong Li and Dong Wang},
  journal={arXiv preprint arXiv:2504.00420},
  year={2025}
}