PLATO: Policy Learning using Adaptive Trajectory Optimization

Policy search can in principle acquire complex strategies for controlling robots, self-driving vehicles, and other autonomous systems. When the policy is trained to process raw sensory inputs, such as images and depth maps, it can acquire a strategy that combines perception and control. However, effectively processing such complex inputs requires an expressive policy class, such as a large neural network. These high-dimensional policies are difficult to train, especially when training must be performed on a safety-critical system, where executing a partially trained and potentially unsafe policy is unacceptable. We propose PLATO, an algorithm that trains complex control policies with supervised learning, using model-predictive control (MPC) to generate the supervision. PLATO uses an adaptive training method that modifies the behavior of MPC to gradually match the learned policy, so that training samples are generated at states the policy is likely to visit while highly undesirable on-policy actions are avoided. We prove that this type of adaptive MPC expert produces supervision that leads to good long-horizon performance of the resulting policy, and we empirically demonstrate that MPC can still avoid dangerous on-policy actions in unexpected situations during training. Compared to prior methods, our empirical results show that PLATO learns faster and often converges to a better solution on a set of challenging simulated experiments involving autonomous aerial vehicles.
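The training scheme described above can be summarized as: at each step the system executes an adapted MPC controller whose objective is regularized toward the current learned policy, while the supervised-learning labels come from the unadapted (near-optimal) MPC action at the visited states. The sketch below is a minimal illustration on a toy one-dimensional system, not the paper's implementation: the dynamics, the "drive to zero" expert, the linear policy, and the names `mpc_action`, `step_dynamics`, and `train_plato_sketch` are assumptions, and a simple convex combination stands in for the KL-regularized MPC solve.

```python
import numpy as np

def step_dynamics(state, action):
    # Toy linear dynamics standing in for the real system model (assumption).
    return state + 0.1 * action

def mpc_action(state, policy_fn, lam):
    # Stand-in for the adaptive MPC solve: trade off the task-optimal action
    # against staying close to the learned policy's action. PLATO uses a
    # KL-regularized MPC objective; a convex combination is used here only
    # to keep the sketch short.
    expert_act = -state            # toy "drive the state to zero" expert
    policy_act = policy_fn(state)  # action the current learned policy would take
    return (1.0 - lam) * expert_act + lam * policy_act

def train_plato_sketch(num_iters=50, horizon=20):
    rng = np.random.default_rng(0)
    theta = np.zeros(1)                      # linear policy: action = theta * state
    policy_fn = lambda s, th=theta: th * s
    states, labels = [], []
    for it in range(num_iters):
        lam = it / num_iters                 # gradually match the learned policy
        state = rng.normal(size=1)
        for _ in range(horizon):
            # Execute the adapted (policy-matched) controller so that visited
            # states resemble those the learned policy will encounter...
            exec_act = mpc_action(state, policy_fn, lam)
            # ...but label each state with the unadapted MPC action for
            # supervised learning.
            labels.append(mpc_action(state, policy_fn, 0.0))
            states.append(state.copy())
            state = step_dynamics(state, exec_act)
        # Supervised learning step: least-squares fit of the linear policy
        # to the accumulated MPC labels.
        S, A = np.concatenate(states), np.concatenate(labels)
        theta = np.array([S @ A / (S @ S + 1e-8)])
        policy_fn = lambda s, th=theta: th * s
    return theta

if __name__ == "__main__":
    print("learned policy gain:", train_plato_sketch())
```

In this toy setting the regression recovers a gain close to the expert's, while the rollouts are always generated by the adapted MPC controller rather than the partially trained policy.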