ResearchTrend.AI
Accelerating Visual Reinforcement Learning with Separate Primitive Policy for Peg-in-Hole Tasks

21 April 2025
Zichun Xu
Zhaomin Wang
Yuntao Li
Lei Zhuang
Zhiyuan Zhao
Guocai Yang
Jingdong Zhao
Abstract

For peg-in-hole tasks, humans rely on binocular visual perception to locate the peg above the hole surface before proceeding with insertion. This paper draws on this behavior to enable agents to learn efficient assembly strategies through visual reinforcement learning. We propose a Separate Primitive Policy (S2P) that simultaneously learns to derive location and insertion actions; S2P is compatible with model-free reinforcement learning algorithms. Ten insertion tasks featuring different polygons are developed as evaluation benchmarks. Simulation experiments show that S2P boosts sample efficiency and success rate even under force constraints. Real-world experiments further verify the feasibility of S2P, and ablation studies examine its generalizability and the factors that affect its performance.
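The abstract does not detail the S2P architecture, but the core idea — one policy split into separate location and insertion primitives over a shared visual representation — can be illustrated with a minimal sketch. Everything below (the MLP sizes, the shared encoder, and the binary alignment flag used to switch primitives) is a hypothetical construction for illustration, not the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes, rng):
    """Randomly initialized weights for a small MLP (hypothetical sizes)."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with tanh hidden activations, linear output."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

obs_dim, feat_dim, act_dim = 32, 16, 3  # act_dim: end-effector deltas (x, y, z)

encoder = mlp_params([obs_dim, 64, feat_dim], rng)      # shared visual encoder
locate_head = mlp_params([feat_dim, 32, act_dim], rng)  # primitive 1: align above hole
insert_head = mlp_params([feat_dim, 32, act_dim], rng)  # primitive 2: push into hole

def policy(obs, aligned):
    """Select a primitive by an alignment flag (an assumption here;
    the paper may combine or switch primitives differently)."""
    z = mlp(encoder, obs)
    head = insert_head if aligned else locate_head
    return np.tanh(mlp(head, z))  # bounded action

obs = rng.normal(size=obs_dim)      # stand-in for encoded camera observation
a_loc = policy(obs, aligned=False)  # location action
a_ins = policy(obs, aligned=True)   # insertion action
print(a_loc.shape, a_ins.shape)
```

Because both primitives share the encoder but keep separate action heads, each can specialize on its sub-task, which is one plausible reading of why such a split improves sample efficiency; any model-free RL algorithm that optimizes a parameterized policy could train this structure.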

@article{xu2025_2504.14820,
  title={Accelerating Visual Reinforcement Learning with Separate Primitive Policy for Peg-in-Hole Tasks},
  author={Zichun Xu and Zhaomin Wang and Yuntao Li and Lei Zhuang and Zhiyuan Zhao and Guocai Yang and Jingdong Zhao},
  journal={arXiv preprint arXiv:2504.14820},
  year={2025}
}