Model-Free μ Synthesis via Adversarial Reinforcement Learning

Motivated by the recent empirical success of policy-based reinforcement learning (RL), there has been a research trend studying the performance of policy-based RL methods on standard control benchmark problems. In this paper, we examine the effectiveness of policy-based RL methods on an important robust control problem, namely μ synthesis. We build a connection between robust adversarial RL and μ synthesis, and develop a model-free version of the well-known DK-iteration for solving state-feedback μ synthesis with static D-scaling. In the proposed algorithm, the K step mimics the classical central path algorithm by incorporating a recently-developed double-loop adversarial RL method as a subroutine, and the D step is based on model-free finite-difference approximation. An extensive numerical study is also presented to demonstrate the utility of our proposed model-free algorithm. Our study sheds new light on the connections between adversarial RL and robust control.
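
To make the alternating structure concrete, here is a minimal, hypothetical Python sketch of a model-free DK-style loop on a toy double-integrator plant. It is not the authors' implementation: the K step is stubbed out with a zeroth-order random-search policy update (standing in for the paper's double-loop adversarial RL subroutine), the D step does finite-difference descent on a single scalar scaling d, and the plant, cost, and step sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-feedback plant x_{t+1} = A x + B u + w. The sketch only queries it
# through rollouts, mimicking the model-free setting.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])


def rollout_cost(K, d, horizon=50, episodes=8, seed=None):
    """Monte-Carlo estimate of a d-scaled quadratic cost, a stand-in for the
    scaled robust-performance objective that the D-scaling reweights."""
    local = np.random.default_rng(seed)
    total = 0.0
    for _ in range(episodes):
        x = local.normal(size=2)
        for _ in range(horizon):
            u = -K @ x                                  # static state feedback
            total += d * (x @ x) + (u @ u) / d
            x = A @ x + B @ u + 0.01 * local.normal(size=2)
    return total / episodes


def k_step(K, d, iters=20, smoothing=0.05, lr=1e-3):
    """Placeholder K step: zeroth-order random-search update of the gain K.
    (The paper's K step instead runs a double-loop adversarial RL subroutine.)"""
    for _ in range(iters):
        delta = rng.normal(size=K.shape)
        seed = int(rng.integers(1 << 31))               # common random numbers
        diff = (rollout_cost(K + smoothing * delta, d, seed=seed)
                - rollout_cost(K - smoothing * delta, d, seed=seed))
        K = K - lr * diff / (2 * smoothing) * delta
    return K


def d_step(K, d, eps=1e-2, lr=1e-3, iters=10):
    """Model-free D step: finite-difference descent on the static scaling d."""
    for _ in range(iters):
        seed = int(rng.integers(1 << 31))
        grad = (rollout_cost(K, d + eps, seed=seed)
                - rollout_cost(K, d - eps, seed=seed)) / (2 * eps)
        d = max(d - lr * grad, 1e-3)                    # keep the scaling positive
    return d


# Alternate the two steps, mirroring the DK-iteration structure.
K, d = np.array([[0.1, 0.1]]), 1.0
for it in range(5):
    K = k_step(K, d)
    d = d_step(K, d)
    print(f"iter {it}: d = {d:.3f}, estimated cost = {rollout_cost(K, d, seed=it):.2f}")
```

Each finite-difference pair reuses the same random seed so that Monte-Carlo noise in the rollout cost does not swamp the small perturbation being estimated; this variance-reduction choice is part of the sketch, not a claim about the paper's method.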