Asymmetric self-play for automatic goal discovery in robotic manipulation

13 January 2021
OpenAI
Matthias Plappert
Raul Sampedro
Tao Xu
Ilge Akkaya
V. Kosaraju
Peter Welinder
Ruben D'Sa
Arthur Petron
Henrique Pondé de Oliveira Pinto
Alex Paino
Hyeonwoo Noh
Lilian Weng
Qiming Yuan
Casey Chu
Wojciech Zaremba
SSL
arXiv:2101.04882 (PDF, HTML)
Abstract

We train a single, goal-conditioned policy that can solve many robotic manipulation tasks, including tasks with previously unseen goals and objects. We rely on asymmetric self-play for goal discovery, where two agents, Alice and Bob, play a game. Alice is asked to propose challenging goals and Bob aims to solve them. We show that this method can discover highly diverse and complex goals without any human priors. Bob can be trained with only sparse rewards, because the interaction between Alice and Bob results in a natural curriculum and Bob can learn from Alice's trajectory when it is relabeled as a goal-conditioned demonstration. Finally, our method scales, resulting in a single policy that can generalize to many unseen tasks such as setting a table, stacking blocks, and solving simple puzzles. Videos of the learned policy are available at https://robotics-self-play.github.io.
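
The loop the abstract describes can be summarized in a few steps: Alice acts in the environment and her final state becomes the proposed goal; Bob then tries to reach that goal under a sparse success reward; when Bob fails, Alice's own trajectory is relabeled as a goal-conditioned demonstration that Bob can clone. The sketch below illustrates this structure on a toy 1-D environment. It is only a minimal illustration of the idea under stated assumptions; the class and function names (ToyEnv, Policy, alice_proposes_goal, bob_attempts_goal, self_play_round) are hypothetical and not the paper's actual implementation.

# Minimal sketch of the asymmetric self-play loop described in the abstract.
# All names here are illustrative assumptions, not the paper's code.
import random


class ToyEnv:
    """1-D toy environment: the state is an integer position; actions move it by -1, 0, or +1."""

    def __init__(self):
        self.state = 0

    def reset(self, state=0):
        self.state = state
        return self.state

    def step(self, action):
        self.state += action
        return self.state


class Policy:
    """Placeholder goal-conditioned policy: acts randomly; the training hooks are stubs."""

    def act(self, state, goal=None):
        return random.choice([-1, 0, 1])

    def train_rl(self, transitions):        # sparse-reward RL update (stub)
        pass

    def train_bc(self, demonstrations):     # behavioral cloning on relabeled demos (stub)
        pass


def alice_proposes_goal(env, alice, horizon=5):
    """Alice acts from the initial state; her final state becomes the proposed goal."""
    state = env.reset()
    trajectory = []
    for _ in range(horizon):
        action = alice.act(state)
        next_state = env.step(action)
        trajectory.append((state, action, next_state))
        state = next_state
    return state, trajectory  # goal = Alice's final state


def bob_attempts_goal(env, bob, goal, horizon=5):
    """Bob tries to reach Alice's goal; the reward is sparse (1 only on success)."""
    state = env.reset()
    transitions = []
    for _ in range(horizon):
        action = bob.act(state, goal)
        next_state = env.step(action)
        success = (next_state == goal)
        transitions.append((state, goal, action, float(success)))
        state = next_state
        if success:
            return True, transitions
    return False, transitions


def self_play_round(env, alice, bob):
    goal, alice_traj = alice_proposes_goal(env, alice)
    solved, bob_transitions = bob_attempts_goal(env, bob, goal)
    bob.train_rl(bob_transitions)
    if not solved:
        # Relabel Alice's trajectory as a goal-conditioned demonstration for Bob.
        demo = [(s, goal, a) for (s, a, _) in alice_traj]
        bob.train_bc(demo)
    # Rewarding Alice when Bob fails pushes her toward harder (but achievable) goals,
    # which yields the natural curriculum mentioned in the abstract.
    return solved


if __name__ == "__main__":
    env, alice, bob = ToyEnv(), Policy(), Policy()
    results = [self_play_round(env, alice, bob) for _ in range(10)]
    print(f"Bob solved {sum(results)}/10 proposed goals")

Because Alice can only propose goals she has actually reached, every proposed goal is achievable by construction, which is what makes the relabeled demonstrations valid supervision for Bob.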
