An operator view of policy gradient methods

We cast policy gradient methods as the repeated application of two operators: a policy improvement operator $\mathcal{I}$, which maps any policy $\pi$ to a better one $\mathcal{I}\pi$, and a projection operator $\mathcal{P}$, which finds the best approximation of $\mathcal{I}\pi$ in the set of realizable policies. We use this framework to introduce operator-based versions of traditional policy gradient methods such as REINFORCE and PPO, which leads to a better understanding of their original counterparts. We also use the understanding we develop of the role of $\mathcal{I}$ and $\mathcal{P}$ to propose a new global lower bound of the expected return. This new perspective allows us to further bridge the gap between policy-based and value-based methods, showing how REINFORCE and the Bellman optimality operator, for example, can be seen as two sides of the same coin.
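As a rough illustration of the two-operator view (a sketch only; the multiplicative improvement operator and KL projection below are one natural instantiation for the REINFORCE setting, assuming nonnegative action values, and are not quoted from the paper), one iteration can be written as

$$\pi_{k+1} = \mathcal{P}\,\mathcal{I}\,\pi_k, \qquad (\mathcal{I}\pi)(a \mid s) \propto \pi(a \mid s)\, Q^{\pi}(s,a), \qquad \mathcal{P}\mu = \arg\min_{\theta}\ \mathbb{E}_{s}\!\left[\mathrm{KL}\!\left(\mu(\cdot \mid s) \,\middle\|\, \pi_\theta(\cdot \mid s)\right)\right].$$

Evaluated at $\pi_\theta = \pi$, a single gradient step on this projection objective points in the familiar REINFORCE direction $\mathbb{E}_{\pi_\theta}\!\left[Q^{\pi_\theta}(s,a)\, \nabla_\theta \log \pi_\theta(a \mid s)\right]$, up to a per-state normalizing constant, which is the sense in which REINFORCE can be read as approximately applying $\mathcal{P}\,\mathcal{I}$.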
View on arXiv