SAMG: Offline-to-Online Reinforcement Learning via State-Action-Conditional Offline Model Guidance

Offline-to-online (O2O) reinforcement learning (RL) pre-trains models on offline data and refines policies through online fine-tuning. However, existing O2O RL algorithms typically need to retain the full offline dataset during fine-tuning to mitigate the effects of out-of-distribution (OOD) data, which significantly limits their efficiency in exploiting online samples. To address this deficiency, we introduce a new paradigm for O2O RL called State-Action-Conditional Offline Model Guidance (SAMG). It freezes the pre-trained offline critic to provide a compact offline value estimate for each state-action sample, thus eliminating the need for retraining on offline data. The frozen offline critic is combined with the online target critic through a state-action-adaptive weighting coefficient. This coefficient captures how offline each sample is at the state-action level and is updated adaptively during training. In practice, SAMG can be easily integrated with Q-function-based algorithms. Theoretical analysis shows that SAMG retains good optimality properties and achieves lower estimation error. Empirically, SAMG outperforms state-of-the-art O2O RL algorithms on the D4RL benchmark.
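To make the guidance mechanism concrete, below is a minimal sketch of how a frozen offline critic could be blended with an online target critic through a state-action-adaptive coefficient when computing the TD target. This is an illustrative PyTorch-style example under our own assumptions, not the authors' implementation: the module names (SAMGTarget, coef_net) and the use of a sigmoid-squashed coefficient network are hypothetical choices for exposition.

    import torch
    import torch.nn as nn

    class SAMGTarget(nn.Module):
        """Illustrative sketch of a blended TD target in the spirit of SAMG.

        A frozen offline critic and the online target critic are mixed with a
        state-action-adaptive coefficient c(s, a) in [0, 1].
        """

        def __init__(self, offline_critic: nn.Module, online_target_critic: nn.Module,
                     coef_net: nn.Module, gamma: float = 0.99):
            super().__init__()
            self.offline_critic = offline_critic
            # Freeze the pre-trained offline critic so no offline retraining is needed.
            for p in self.offline_critic.parameters():
                p.requires_grad_(False)
            self.online_target_critic = online_target_critic
            self.coef_net = coef_net  # hypothetical network producing the adaptive coefficient
            self.gamma = gamma

        @torch.no_grad()
        def td_target(self, reward, next_state, next_action, done):
            q_off = self.offline_critic(next_state, next_action)        # frozen offline estimate
            q_on = self.online_target_critic(next_state, next_action)   # standard online target
            c = torch.sigmoid(self.coef_net(next_state, next_action))   # "offline degree" of (s, a)
            q_blend = c * q_off + (1.0 - c) * q_on                      # state-action-conditional guidance
            return reward + self.gamma * (1.0 - done) * q_blend

The blended target can then replace the usual bootstrapped target in any Q-function-based algorithm (e.g., the critic loss of an actor-critic method), which is what makes this kind of guidance easy to drop into existing pipelines.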
@article{zhang2025_2410.18626,
  title   = {SAMG: Offline-to-Online Reinforcement Learning via State-Action-Conditional Offline Model Guidance},
  author  = {Liyu Zhang and Haochi Wu and Xu Wan and Quan Kong and Ruilong Deng and Mingyang Sun},
  journal = {arXiv preprint arXiv:2410.18626},
  year    = {2025}
}