Outcome-Based Online Reinforcement Learning: Algorithms and Fundamental Limits

26 May 2025
Fan Chen
Zeyu Jia
Alexander Rakhlin
Tengyang Xie
    OffRL
arXiv (abs) · PDF · HTML
Main: 9 pages
Bibliography: 5 pages
Appendix: 19 pages
Tables: 1
Abstract

Reinforcement learning with outcome-based feedback faces a fundamental challenge: when rewards are only observed at trajectory endpoints, how do we assign credit to the right actions? This paper provides the first comprehensive analysis of this problem in online RL with general function approximation. We develop a provably sample-efficient algorithm achieving $\widetilde{O}(C_{\rm cov} H^3/\epsilon^2)$ sample complexity, where $C_{\rm cov}$ is the coverability coefficient of the underlying MDP. By leveraging general function approximation, our approach works effectively in large or infinite state spaces where tabular methods fail, requiring only that value functions and reward functions can be represented by appropriate function classes. Our results also characterize when outcome-based feedback is statistically separated from per-step rewards, revealing an unavoidable exponential separation for certain MDPs. For deterministic MDPs, we show how to eliminate the completeness assumption, dramatically simplifying the algorithm. We further extend our approach to preference-based feedback settings, proving that equivalent statistical efficiency can be achieved even under more limited information. Together, these results constitute a theoretical foundation for understanding the statistical properties of outcome-based reinforcement learning.
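
As a rough, back-of-the-envelope illustration (not part of the paper), the leading term of the stated bound, $C_{\rm cov} H^3/\epsilon^2$, can be evaluated for hypothetical values of the coverability coefficient, horizon, and target accuracy; the constants and logarithmic factors suppressed by the $\widetilde{O}$ notation are ignored.

# Illustrative sketch only: evaluates the leading term C_cov * H^3 / eps^2 of the
# paper's sample-complexity bound for hypothetical parameter values, ignoring the
# constants and logarithmic factors hidden by the O-tilde notation.
def outcome_rl_sample_bound(c_cov: float, horizon: int, epsilon: float) -> float:
    """Leading-order term C_cov * H^3 / epsilon^2 (trajectories, up to log factors)."""
    return c_cov * horizon ** 3 / epsilon ** 2

# Hypothetical values: coverability coefficient 50, horizon 20, target accuracy 0.1.
print(outcome_rl_sample_bound(c_cov=50, horizon=20, epsilon=0.1))  # 4.0e7 trajectories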

View on arXiv
@article{chen2025_2505.20268,
  title={Outcome-Based Online Reinforcement Learning: Algorithms and Fundamental Limits},
  author={Fan Chen and Zeyu Jia and Alexander Rakhlin and Tengyang Xie},
  journal={arXiv preprint arXiv:2505.20268},
  year={2025}
}