
Bi-Level Policy Optimization with Nyström Hypergradients

Abstract

The dependency of the actor on the critic in actor-critic (AC) reinforcement learning means that AC can be characterized as a bilevel optimization (BLO) problem, also called a Stackelberg game. This characterization motivates two modifications to vanilla AC algorithms. First, the critic's update should be nested to learn a best response to the actor's policy. Second, the actor should update according to a hypergradient that takes changes in the critic's behavior into account. Computing this hypergradient involves finding an inverse Hessian vector product, a process that can be numerically unstable. We thus propose a new algorithm, Bilevel Policy Optimization with Nyström Hypergradients (BLPO), which uses nesting to account for the nested structure of BLO, and leverages the Nyström method to compute the hypergradient. Theoretically, we prove BLPO converges to (a point that satisfies the necessary conditions for) a local strong Stackelberg equilibrium in polynomial time with high probability, assuming a linear parametrization of the critic's objective. Empirically, we demonstrate that BLPO performs on par with or better than PPO on a variety of discrete and continuous control tasks.
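The core numerical step the abstract describes is approximating an inverse Hessian-vector product with the Nyström method. Below is a minimal, self-contained sketch of that idea, not the authors' BLPO implementation: the function name `nystrom_ihvp`, the damping parameter `rho`, the landmark count `k`, and the toy Hessian are all illustrative assumptions.

```python
# Minimal sketch of a Nystrom-based inverse Hessian-vector product,
# the quantity a hypergradient update requires. Assumptions: H is a
# symmetric PSD matrix whose columns we can read, and `rho` is a small
# damping term that keeps the linear solve well posed.
import numpy as np

def nystrom_ihvp(H, v, k, rho=1e-2, seed=0):
    """Approximate (H + rho*I)^{-1} v using a rank-k Nystrom sketch of H."""
    n = H.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=k, replace=False)   # sampled landmark columns
    C = H[:, idx]                                # n x k block of H
    W = H[np.ix_(idx, idx)]                      # k x k core block (assumed invertible)
    # Nystrom approximation: H_hat = C W^{-1} C^T. With damping, the Woodbury
    # identity gives (H_hat + rho*I)^{-1} v without forming any n x n inverse:
    #   (rho*I + C W^{-1} C^T)^{-1} v = (v - C (rho*W + C^T C)^{-1} C^T v) / rho
    inner = rho * W + C.T @ C
    return (v - C @ np.linalg.solve(inner, C.T @ v)) / rho

# Toy check against an exact damped solve on a PSD matrix with fast spectral decay.
if __name__ == "__main__":
    n, rho = 200, 1e-2
    rng = np.random.default_rng(1)
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    eigs = 1.0 / np.arange(1, n + 1) ** 2        # decaying spectrum, full rank
    H = (Q * eigs) @ Q.T                          # stand-in for a critic Hessian
    v = rng.normal(size=n)
    approx = nystrom_ihvp(H, v, k=50, rho=rho)
    exact = np.linalg.solve(H + rho * np.eye(n), v)
    print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

The point of the sketch is the cost profile: only `k` columns of the Hessian are touched and the only dense solve is k-by-k, which is what makes the hypergradient cheaper and more stable than inverting the full Hessian.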

@article{prakash2025_2505.11714,
  title={Bi-Level Policy Optimization with Nyström Hypergradients},
  author={Arjun Prakash and Naicheng He and Denizalp Goktas and Amy Greenwald},
  journal={arXiv preprint arXiv:2505.11714},
  year={2025}
}