There has been substantial recent progress on the theoretical understanding of model-free approaches to Linear Quadratic Regulator (LQR) problems. Much attention has been devoted to the special case in which the goal is to drive the state close to a zero target. In this work, we consider the general case where the target is allowed to be arbitrary, which we refer to as the LQR tracking problem. We study the optimization landscape of this problem and show that, like the zero-target LQR problem, the LQR tracking problem satisfies gradient dominance and local smoothness properties. These properties allow us to develop a zeroth-order policy gradient algorithm that achieves global convergence. We support our arguments with numerical simulations on a linear system.
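The abstract does not include the algorithm itself; the following is a minimal sketch of what a two-point zeroth-order policy gradient method for an LQR tracking instance could look like. The affine policy parametrization u_t = -K x_t + b, the system matrices A, B, Q, R, the target x_bar, the horizon H, the smoothing radius, and the step size below are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative problem data (hypothetical, not from the paper) ---
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])      # stable open-loop dynamics
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = 0.1 * np.eye(1)
x_bar = np.array([1.0, 0.0])    # nonzero tracking target
x0 = np.zeros(2)
H = 50                          # finite horizon approximating the long-run cost


def rollout_cost(theta):
    """Tracking cost of the affine policy u_t = -K x_t + b over H steps."""
    K = theta[:, :2]            # feedback gain (1 x 2)
    b = theta[:, 2]             # feedforward offset (1,)
    x, cost = x0.copy(), 0.0
    for _ in range(H):
        u = -K @ x + b
        e = x - x_bar
        cost += e @ Q @ e + u @ R @ u
        x = A @ x + B @ u
    return cost


def zeroth_order_gradient(theta, radius=0.05, n_samples=64):
    """Two-point smoothed gradient estimate built only from cost evaluations."""
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        U = rng.standard_normal(theta.shape)        # random perturbation direction
        delta = rollout_cost(theta + radius * U) - rollout_cost(theta - radius * U)
        grad += delta / (2.0 * radius) * U
    return grad / n_samples


# --- Model-free policy gradient loop ---
theta = np.zeros((1, 3))        # parameters [K | b], initialized at zero
step = 1e-3
for it in range(300):
    theta -= step * zeroth_order_gradient(theta)
    if it % 50 == 0:
        print(f"iter {it:3d}  cost {rollout_cost(theta):.4f}")
```

The sketch only illustrates the general zeroth-order idea (estimating the policy gradient from perturbed cost evaluations and descending on it); the paper's actual algorithm, step sizes, and convergence guarantees are not reproduced here.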