
Orthogonalized Estimation of Difference of Q-functions

Angela Zhou
Abstract

Offline reinforcement learning is important in many settings where observational data is available but new policies cannot be deployed online due to safety, cost, or other concerns. Many recent advances in causal inference and machine learning target estimation of causal contrast functions such as the CATE, which is sufficient for optimizing decisions and can adapt to potentially smoother structure. We develop a dynamic generalization of the R-learner (Nie and Wager 2021, Lewis and Syrgkanis 2021) for estimating and optimizing the difference of $Q^\pi$-functions, $Q^\pi(s,1)-Q^\pi(s,0)$ (which can be used to optimize over multiple-valued actions). We leverage orthogonal estimation to improve convergence rates in the presence of slower nuisance estimation rates and prove consistency of policy optimization under a margin condition. The method can leverage black-box nuisance estimators of the $Q$-function and behavior policy to target estimation of a more structured $Q$-function contrast.
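For reference, the static R-learner objective of Nie and Wager (2021), which this work generalizes to the sequential setting, residualizes the outcome and the treatment on cross-fitted nuisance estimates $\hat{m}(x)=\mathbb{E}[Y \mid X=x]$ and $\hat{e}(x)=\mathbb{P}(A=1 \mid X=x)$ and regresses residual on residual:

$$\hat{\tau} \in \arg\min_{\tau} \; \frac{1}{n}\sum_{i=1}^{n} \Big[ \big(Y_i - \hat{m}^{(-k(i))}(X_i)\big) - \big(A_i - \hat{e}^{(-k(i))}(X_i)\big)\,\tau(X_i) \Big]^2,$$

where $\hat{m}^{(-k(i))}$ and $\hat{e}^{(-k(i))}$ are fit on folds excluding observation $i$. The dynamic objective studied in this paper plays an analogous role, with the $Q^\pi$-function and the behavior policy serving as nuisances and the contrast $Q^\pi(s,1)-Q^\pi(s,0)$ as the estimation target; see the paper for the exact formulation.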
