Towards Physically Safe Reinforcement Learning under Supervision

19 January 2019
Yinan Zhang
Devin J. Balkcom
Haoxiang Li
OffRL
arXiv:1901.06576
Abstract

This paper addresses the question of how a previously available control policy π_s can be used as a supervisor to more quickly and safely train a new learned control policy π_L for a robot. A weighted average of the supervisor and learned policies is used during trials, with a heavier weight initially on the supervisor, in order to allow safe and useful physical trials while the learned policy is still ineffective. During the process, the weight is adjusted to favor the learned policy. As weights are adjusted, the learned network must compensate so as to give safe and reasonable outputs under the different weights. A pioneer network is introduced that pre-learns a policy which, under the planned next weight setting, behaves similarly to the current learned policy; this pioneer network then replaces the current learned network in the next set of trials. Experiments in OpenAI Gym demonstrate the effectiveness of the proposed method.
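
The blending scheme described above amounts to executing a_t = w · π_s(s_t) + (1 − w) · π_L(s_t) and annealing w from near 1 toward 0 as the learned policy improves. The sketch below is only an illustration of that idea under assumed names (supervisor_policy, learned_policy, BlendedPolicy) and an assumed multiplicative decay schedule; it is not the authors' implementation, which additionally retrains a pioneer network before each weight change.

```python
import numpy as np

# Placeholder policies (assumptions for illustration): in the paper,
# pi_s would be an existing controller and pi_L a neural network.
def supervisor_policy(state):
    return -0.5 * state          # crude stabilizing controller

def learned_policy(state):
    return np.tanh(state)        # stand-in for an initially untrained network


class BlendedPolicy:
    """Weighted average of a supervisor policy and a learned policy.

    The weight w starts close to 1 (mostly supervisor) so that early
    physical trials stay safe, and is annealed toward 0 (mostly learned
    policy) as training proceeds.
    """

    def __init__(self, pi_s, pi_l, w_init=0.9, w_min=0.0, decay=0.95):
        self.pi_s, self.pi_l = pi_s, pi_l
        self.w, self.w_min, self.decay = w_init, w_min, decay

    def act(self, state):
        # Blend the two actions with the current weight.
        return self.w * self.pi_s(state) + (1.0 - self.w) * self.pi_l(state)

    def step_schedule(self):
        # Shift the weight toward the learned policy after each trial
        # (a simple multiplicative decay, assumed here for illustration).
        self.w = max(self.w_min, self.w * self.decay)


if __name__ == "__main__":
    policy = BlendedPolicy(supervisor_policy, learned_policy)
    state = np.array([0.3, -0.1])
    for trial in range(5):
        print(f"trial {trial}: w={policy.w:.3f}, action={policy.act(state)}")
        policy.step_schedule()
```

In the paper's setting, simply changing w would make the blended behavior drift, which is why the learned network (via the pioneer network) is pre-adapted to each planned weight before it is used in trials.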
