Yes, Q-learning Helps Offline In-Context RL

Abstract

Existing offline in-context reinforcement learning (ICRL) methods have predominantly relied on supervised training objectives, which are known to have limitations in offline RL settings. In this study, we explore the integration of RL objectives within an offline ICRL framework. Through experiments on more than 150 datasets derived from GridWorld and MuJoCo environments, we demonstrate that directly optimizing RL objectives improves performance by approximately 30% on average over the widely adopted Algorithm Distillation (AD), across various dataset coverages, structures, expertise levels, and environmental complexities. Furthermore, in the challenging XLand-MiniGrid environment, RL objectives doubled the performance of AD. Our results also reveal that adding conservatism during value learning brings further improvements in almost all settings tested. These findings emphasize the importance of aligning ICRL learning objectives with the reward-maximization goal of RL and demonstrate that offline RL is a promising direction for advancing ICRL.
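
To make the contrast concrete, the sketch below (PyTorch) compares an AD-style supervised next-action prediction loss with a Q-learning TD loss augmented by a CQL-style conservative penalty. The function names, tensor shapes, and hyperparameters are illustrative assumptions for a discrete-action setting and are not taken from the paper's implementation.

# Minimal sketch contrasting the two training objectives discussed in the abstract.
# All names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def ad_loss(action_logits, target_actions):
    # Algorithm Distillation style objective: supervised next-action prediction.
    # action_logits:  (batch, context_len, num_actions) model outputs
    # target_actions: (batch, context_len) actions from offline learning histories
    return F.cross_entropy(
        action_logits.flatten(0, 1),   # (batch * context_len, num_actions)
        target_actions.flatten(),      # (batch * context_len,)
    )

def conservative_q_loss(q_values, target_q_values, actions, rewards, dones,
                        gamma=0.99, cql_alpha=1.0):
    # Q-learning objective with a CQL-style conservative penalty.
    # q_values:        (batch, context_len, num_actions) Q-head outputs at step t
    # target_q_values: (batch, context_len, num_actions) target-network outputs at step t+1
    # actions, rewards, dones: (batch, context_len) offline transitions
    q_taken = q_values.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    with torch.no_grad():
        bootstrap = target_q_values.max(dim=-1).values
        td_target = rewards + gamma * (1.0 - dones) * bootstrap
    td_loss = F.mse_loss(q_taken, td_target)
    # Conservatism: push down Q-values of all actions, push up dataset actions.
    cql_penalty = (torch.logsumexp(q_values, dim=-1) - q_taken).mean()
    return td_loss + cql_alpha * cql_penalty

The AD loss only imitates the logged behavior, whereas the Q-learning loss optimizes for return, and the conservative penalty discourages overestimating actions unsupported by the offline data.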

@article{tarasov2025_2502.17666,
  title={Yes, Q-learning Helps Offline In-Context RL},
  author={Denis Tarasov and Alexander Nikulin and Ilya Zisman and Albina Klepach and Andrei Polubarov and Nikita Lyubaykin and Alexander Derevyagin and Igor Kiselev and Vladislav Kurenkov},
  journal={arXiv preprint arXiv:2502.17666},
  year={2025}
}