

How to Provably Improve Return Conditioned Supervised Learning?

10 June 2025
Zhishuai Liu
Yu Yang
Ruhan Wang
Pan Xu
Dongruo Zhou
    OffRL
Abstract

In sequential decision-making problems, Return-Conditioned Supervised Learning (RCSL) has gained increasing recognition for its simplicity and stability in modern decision-making tasks. Unlike traditional offline reinforcement learning (RL) algorithms, RCSL frames policy learning as a supervised learning problem by taking both the state and return as input. This approach eliminates the instability often associated with temporal difference (TD) learning in offline RL. However, RCSL has been criticized for lacking the stitching property, meaning its performance is inherently limited by the quality of the policy used to generate the offline dataset. To address this limitation, we propose a principled and simple framework called Reinforced RCSL. The key innovation of our framework is the introduction of a concept we call the in-distribution optimal return-to-go. This mechanism leverages our policy to identify the best achievable in-dataset future return based on the current state, avoiding the need for complex return augmentation techniques. Our theoretical analysis demonstrates that Reinforced RCSL can consistently outperform the standard RCSL approach. Empirical results further validate our claims, showing significant performance improvements across a range of benchmarks.
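The two ideas in the abstract — training a policy on (state, return-to-go) pairs by supervised learning, and replacing the dataset's observed return with the best return-to-go seen from each state — can be illustrated with a minimal sketch. This is a toy illustration under a tabular-state assumption; the function names and the dictionary-based lookup are mine, not taken from the paper's implementation:

```python
def returns_to_go(rewards, gamma=1.0):
    """Suffix sums of a trajectory's rewards: g_t = r_t + gamma * g_{t+1}.

    These g_t values are the return-to-go labels that standard RCSL
    conditions on during supervised policy training.
    """
    g = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        g[t] = running
    return g


def in_distribution_optimal_rtg(dataset):
    """Toy version of the 'in-distribution optimal return-to-go':
    for each state appearing in the offline dataset, keep the best
    return-to-go observed from that state across all trajectories.
    """
    best = {}
    for state, _action, rtg in dataset:
        best[state] = max(best.get(state, float("-inf")), rtg)
    return best


# Toy offline data: two trajectories that pass through state 1.
traj_a = {"states": [0, 1, 2], "rewards": [1.0, 0.0, 2.0]}
traj_b = {"states": [3, 1, 4], "rewards": [0.0, 5.0, 1.0]}

dataset = []
for traj in (traj_a, traj_b):
    rtgs = returns_to_go(traj["rewards"])
    # Actions are irrelevant to this sketch; use a placeholder.
    dataset += [(s, None, g) for s, g in zip(traj["states"], rtgs)]

best = in_distribution_optimal_rtg(dataset)
# From state 1, trajectory A achieves return-to-go 2.0 but trajectory B
# achieves 6.0, so the in-distribution optimal value at state 1 is 6.0.
```

At inference time, the idea is that conditioning the learned policy on `best[state]` rather than on a return observed along a single trajectory lets the agent follow the best continuation present anywhere in the dataset, which is one way to view the "stitching" behavior the abstract refers to.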

@article{liu2025_2506.08463,
  title={How to Provably Improve Return Conditioned Supervised Learning?},
  author={Zhishuai Liu and Yu Yang and Ruhan Wang and Pan Xu and Dongruo Zhou},
  journal={arXiv preprint arXiv:2506.08463},
  year={2025}
}
Main: 12 pages · Appendix: 9 pages · Bibliography: 4 pages · 4 figures · 12 tables