
Learning to Trust Bellman Updates: Selective State-Adaptive Regularization for Offline RL

Main: 8 pages
Appendix: 6 pages
Bibliography: 4 pages
Figures: 4
Tables: 6
Abstract

Offline reinforcement learning (RL) aims to learn an effective policy from a static dataset. To alleviate extrapolation errors, existing studies often regularize the value function or policy updates uniformly across all states. However, because data quality varies substantially, a fixed regularization strength leads to a dilemma: a weak strength fails to curb extrapolation errors and value overestimation, while a strong strength pushes policy learning toward behavior cloning, impeding the performance gains that Bellman updates could otherwise deliver. To address this issue, we propose a selective state-adaptive regularization method for offline RL. Specifically, we introduce state-adaptive regularization coefficients that learn to trust state-level Bellman-driven results, while selectively applying regularization only to high-quality actions, thereby avoiding the performance degradation caused by tight constraints on low-quality actions. By establishing a connection between the representative value regularization method, CQL, and explicit policy constraint methods, we extend selective state-adaptive regularization to both of these mainstream offline RL approaches. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art approaches in both offline and offline-to-online settings on the D4RL benchmark.

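As a rough illustration of the idea described in the abstract (a minimal sketch, not the authors' implementation), the Python snippet below shows how a CQL-style critic loss could combine a per-state regularization coefficient with a selectivity mask that only constrains high-quality dataset actions. The names q_net, alpha_net, advantages, and adv_threshold are hypothetical placeholders introduced here for illustration.

import torch.nn.functional as F

def selective_adaptive_cql_loss(q_net, alpha_net, states, actions, td_targets,
                                policy_actions, advantages, adv_threshold=0.0):
    """Sketch: CQL-style critic loss with a state-adaptive coefficient and a selectivity mask."""
    # Standard Bellman (TD) term on dataset transitions.
    q_data = q_net(states, actions)                    # shape (B,)
    td_loss = F.mse_loss(q_data, td_targets)

    # Conservative gap: minimizing it pushes Q down on policy actions and
    # up on dataset actions, as in CQL.
    q_pi = q_net(states, policy_actions)
    cql_gap = q_pi - q_data

    # State-adaptive coefficient: one regularization strength per state,
    # e.g. produced by a small network trained with a separate dual objective.
    alpha = F.softplus(alpha_net(states)).squeeze(-1)  # shape (B,)

    # Selectivity: only regularize states whose dataset action looks high quality,
    # approximated here by a positive advantage estimate (an assumption of this sketch).
    mask = (advantages > adv_threshold).float()

    reg_loss = (alpha.detach() * mask * cql_gap).mean()
    return td_loss + reg_loss

Detaching the coefficient inside the critic loss mirrors the common dual/Lagrangian treatment, where the regularization strength is updated by its own objective rather than by the Bellman loss itself; whether the paper follows exactly this scheme is not stated in the abstract.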
@article{luo2025_2505.19923,
  title={Learning to Trust Bellman Updates: Selective State-Adaptive Regularization for Offline RL},
  author={Qin-Wen Luo and Ming-Kun Xie and Ye-Wen Wang and Sheng-Jun Huang},
  journal={arXiv preprint arXiv:2505.19923},
  year={2025}
}