
Provably Efficient Offline-to-Online Value Adaptation with General Function Approximation

Shangzhe Li
Weitong Zhang
Main: 12 pages
Bibliography: 5 pages
3 tables
Appendix: 27 pages
Abstract

We study value adaptation in offline-to-online reinforcement learning under general function approximation. Starting from an imperfect offline pretrained $Q$-function, the learner aims to adapt it to the target environment using only a limited amount of online interaction. We first characterize the difficulty of this setting by establishing a minimax lower bound, showing that even when the pretrained $Q$-function is close to the optimal $Q^\star$, online adaptation can be no more efficient than pure online RL on certain hard instances. On the positive side, under a novel structural condition on the offline-pretrained value functions, we propose O2O-LSVI, an adaptation algorithm with problem-dependent sample complexity that provably improves over pure online RL. Finally, we complement our theory with neural-network experiments that demonstrate the practical effectiveness of the proposed method.
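The abstract gives no pseudocode for O2O-LSVI, so the sketch below is only a generic illustration of the warm-start idea behind LSVI-style offline-to-online adaptation, not the paper's actual algorithm: keep a linear $Q$-function $Q(s,a) = \phi(s,a)^\top w$, initialize $w$ from the offline-pretrained weights, and refine it with regularized least-squares regression onto Bellman targets computed from online transitions. All names (`lsvi_update`, `w_offline`) and the random data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4        # feature dimension of phi(s, a)
n = 200      # number of online transitions collected
gamma = 0.99 # discount factor

# Hypothetical offline-pretrained weights for a linear Q-function.
w_offline = rng.normal(size=d)

# Simulated online data: features of visited (s, a) pairs, observed rewards,
# and features of every action at the next state (for the Bellman max).
phi = rng.normal(size=(n, d))          # phi(s_i, a_i)
rewards = rng.normal(size=n)           # r_i
phi_next = rng.normal(size=(n, 3, d))  # phi(s'_i, a) for each of 3 actions

def lsvi_update(w, phi, rewards, phi_next, lam=1.0):
    """One regularized least-squares value-iteration step:
    regress Bellman targets r + gamma * max_a Q(s', a) onto the features."""
    targets = rewards + gamma * (phi_next @ w).max(axis=1)
    A = phi.T @ phi + lam * np.eye(phi.shape[1])  # ridge-regularized Gram matrix
    b = phi.T @ targets
    return np.linalg.solve(A, b)

# Warm-start from the offline weights, then refine with the online data;
# the offline initialization is what distinguishes this from pure online LSVI.
w = w_offline.copy()
for _ in range(5):
    w = lsvi_update(w, phi, rewards, phi_next)
```

The point of the warm start is that when `w_offline` is already close to optimal, the online regression only needs to correct the residual error rather than learn the value function from scratch.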
