Imagination-Limited Q-Learning for Offline Reinforcement Learning

18 May 2025
Wenhui Liu
Zhijian Wu
Jingchao Wang
Dingjiang Huang
Shuigeng Zhou
    OffRL
Abstract

Offline reinforcement learning seeks to derive improved policies entirely from historical data but often struggles with over-optimistic value estimates for out-of-distribution (OOD) actions. This issue is typically mitigated via policy constraints or conservative value regularization. However, these approaches may impose overly restrictive constraints or biased value estimates, potentially limiting performance improvements. To balance exploitation and restriction, we propose an Imagination-Limited Q-learning (ILQ) method, which aims to maintain the optimism that OOD actions deserve within appropriate limits. Specifically, we use the dynamics model to imagine OOD action-values and then clip the imagined values at the maximum behavior value. This design preserves a reasonable evaluation of OOD actions to the furthest extent while avoiding over-optimism. Theoretically, we prove the convergence of the proposed ILQ under tabular Markov decision processes. In particular, we show that the error bound between the estimated and optimal values of OOD state-actions has the same magnitude as that of in-distribution ones, indicating that the bias in value estimates is effectively mitigated. Empirically, our method achieves state-of-the-art performance on a wide range of tasks in the D4RL benchmark.
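
The clipping step described in the abstract can be illustrated with a short sketch. The following minimal PyTorch example is an illustration only, not the authors' implementation: the module names (q_net, dynamics_model, policy, behavior_q_max) and the one-step imagination rollout are assumptions used to show how a model-imagined OOD action-value could be clipped at the maximum value of in-distribution (behavior) actions.

import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small feed-forward network used as a stand-in for the critic, model, and actor."""
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def imagination_limited_value(q_net, dynamics_model, policy,
                              behavior_q_max, states, ood_actions, gamma=0.99):
    """Clip a model-imagined value of an OOD action at the maximum behavior value.

    All argument names are hypothetical placeholders (not from the paper's code):
      q_net          -- critic Q(s, a)
      dynamics_model -- maps (s, a) to a predicted [reward, next_state]
      policy         -- actor used for the bootstrapped next action
      behavior_q_max -- max Q(s, a') over dataset (behavior) actions, shape [B, 1]
    """
    sa = torch.cat([states, ood_actions], dim=-1)
    pred = dynamics_model(sa)                       # [B, 1 + state_dim]
    reward, next_state = pred[:, :1], pred[:, 1:]

    # One-step "imagined" value of the OOD action under the learned model.
    next_action = policy(next_state)
    imagined_q = reward + gamma * q_net(torch.cat([next_state, next_action], dim=-1))

    # Imagination limit: never exceed the best value supported by in-distribution actions.
    return torch.minimum(imagined_q, behavior_q_max)

if __name__ == "__main__":
    state_dim, action_dim, batch = 17, 6, 32
    q_net = MLP(state_dim + action_dim, 1)
    dynamics_model = MLP(state_dim + action_dim, 1 + state_dim)
    policy = nn.Sequential(MLP(state_dim, action_dim), nn.Tanh())
    states = torch.randn(batch, state_dim)
    ood_actions = torch.rand(batch, action_dim) * 2 - 1
    behavior_q_max = torch.randn(batch, 1)          # stands in for max over dataset actions
    target = imagination_limited_value(q_net, dynamics_model, policy,
                                       behavior_q_max, states, ood_actions)
    print(target.shape)                             # torch.Size([32, 1])

The only ILQ-specific operation here is the final torch.minimum: imagined values for OOD actions are allowed, but bounded above by the best behavior value, which is what keeps the optimism "within appropriate limits."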

@article{liu2025_2505.12211,
  title={Imagination-Limited Q-Learning for Offline Reinforcement Learning},
  author={Wenhui Liu and Zhijian Wu and Jingchao Wang and Dingjiang Huang and Shuigeng Zhou},
  journal={arXiv preprint arXiv:2505.12211},
  year={2025}
}