Optimistically Optimistic Exploration for Provably Efficient Infinite-Horizon Reinforcement and Imitation Learning

20 February 2025
Antoine Moulin
Gergely Neu
Luca Viano
Abstract

We study the problem of reinforcement learning in infinite-horizon discounted linear Markov decision processes (MDPs), and propose the first computationally efficient algorithm achieving near-optimal regret guarantees in this setting. Our main idea is to combine two classic techniques for optimistic exploration: additive exploration bonuses applied to the reward function, and artificial transitions made to an absorbing state with maximal return. We show that, combined with a regularized approximate dynamic-programming scheme, the resulting algorithm achieves a regret of order $\tilde{\mathcal{O}}(\sqrt{d^3 (1 - \gamma)^{-7/2} T})$, where $T$ is the total number of sample transitions, $\gamma \in (0,1)$ is the discount factor, and $d$ is the feature dimensionality. The results continue to hold against adversarial reward sequences, enabling application of our method to the problem of imitation learning in linear MDPs, where we achieve state-of-the-art results.
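
To make the two optimism devices mentioned in the abstract concrete, here is a minimal, illustrative sketch (not the authors' algorithm) of one optimistic value backup in a linear MDP: a ridge-regression Bellman backup with an additive elliptical exploration bonus, clipped at the return of an artificial absorbing state with maximal reward. All names and parameters (phi, Lambda, beta, r_max) are assumptions introduced for illustration.

import numpy as np

def optimistic_backup(phi, rewards, next_values, Lambda, gamma=0.99,
                      beta=1.0, r_max=1.0):
    """One approximate dynamic-programming step with optimistic exploration.

    phi:         (n, d) feature matrix of observed state-action pairs
    rewards:     (n,)   observed rewards
    next_values: (n,)   current value estimates at the sampled next states
    Lambda:      (d, d) regularized empirical covariance of the features
    """
    Lambda_inv = np.linalg.inv(Lambda)

    # Ridge-regression estimate of the Bellman backup r + gamma * V(s')
    targets = rewards + gamma * next_values
    w = Lambda_inv @ (phi.T @ targets)

    # First optimism device: additive exploration bonus given by the
    # elliptical confidence width beta * ||phi(s, a)||_{Lambda^{-1}}
    bonus = beta * np.sqrt(np.einsum("nd,dk,nk->n", phi, Lambda_inv, phi))

    # Second optimism device: cap the optimistic Q-values at the return of
    # an absorbing state that yields the maximal reward forever.
    q_opt = np.minimum(phi @ w + bonus, r_max / (1.0 - gamma))
    return q_opt

The clipping step plays the role of the "artificial transitions to an absorbing state with maximal return": no estimated value is ever allowed to exceed the return attainable from that absorbing state, which keeps the optimistic estimates bounded while preserving optimism.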

View on arXiv
@article{moulin2025_2502.13900,
  title={Optimistically Optimistic Exploration for Provably Efficient Infinite-Horizon Reinforcement and Imitation Learning},
  author={Antoine Moulin and Gergely Neu and Luca Viano},
  journal={arXiv preprint arXiv:2502.13900},
  year={2025}
}