Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs

15 May 2023
Kaixuan Ji
Qingyue Zhao
Jiafan He
Weitong Zhang
Q. Gu
arXiv:2305.08359
Abstract

Recent studies have shown that episodic reinforcement learning (RL) is no harder than bandits when the total reward is bounded by $1$, and have proved regret bounds that depend only polylogarithmically on the planning horizon $H$. However, it remains an open question whether such results can be carried over to adversarial RL, where the reward is adversarially chosen at each episode. In this paper, we answer this question affirmatively by proposing the first horizon-free policy search algorithm. To tackle the challenges caused by exploration and the adversarially chosen reward, our algorithm employs (1) a variance-uncertainty-aware weighted least squares estimator for the transition kernel; and (2) an occupancy-measure-based technique for the online search of a \emph{stochastic} policy. We show that our algorithm achieves an $\tilde{O}\big((d+\log(|\mathcal{S}|^2 |\mathcal{A}|))\sqrt{K}\big)$ regret with full-information feedback, where $d$ is the dimension of a known feature mapping that linearly parametrizes the unknown transition kernel of the MDP, $K$ is the number of episodes, and $|\mathcal{S}|$ and $|\mathcal{A}|$ are the cardinalities of the state and action spaces. We also provide hardness results and regret lower bounds to justify the near-optimality of our algorithm and the unavoidability of $\log|\mathcal{S}|$ and $\log|\mathcal{A}|$ in the regret bound.
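
To give a concrete sense of the first ingredient, the sketch below shows a generic variance-weighted ridge regression estimator for the transition parameter of a linear mixture MDP, where $P(s'\mid s,a)=\langle \phi(s'\mid s,a), \theta^*\rangle$ and one regresses observed next-state value targets on the aggregated features $\phi_V(s,a)=\sum_{s'}\phi(s'\mid s,a)V(s')$. This is only an illustrative sketch of the weighted least squares idea, not the paper's exact variance-uncertainty-aware estimator; the names (`phis`, `sigma2`, `lam`) and the toy data are assumptions for the example.

```python
import numpy as np

def weighted_ridge_estimate(phis, targets, sigma2, lam=1.0):
    """Variance-weighted ridge regression estimate of theta.

    phis:    (n, d) array of regression features phi_V(s_k, a_k)
    targets: (n,)   array of observed targets V(s'_{k})
    sigma2:  (n,)   per-sample variance (plus uncertainty) weights
    lam:     ridge regularization strength
    """
    d = phis.shape[1]
    w = 1.0 / sigma2                                   # down-weight high-variance samples
    Sigma = lam * np.eye(d) + (phis * w[:, None]).T @ phis
    b = (phis * w[:, None]).T @ targets
    theta_hat = np.linalg.solve(Sigma, b)
    return theta_hat, Sigma                            # Sigma also defines a confidence ellipsoid

# Toy usage with synthetic data (purely illustrative):
rng = np.random.default_rng(0)
d, n = 4, 200
theta_star = rng.normal(size=d) / np.sqrt(d)
phis = rng.normal(size=(n, d))
sigma2 = rng.uniform(0.1, 1.0, size=n)
targets = phis @ theta_star + np.sqrt(sigma2) * rng.normal(size=n)
theta_hat, _ = weighted_ridge_estimate(phis, targets, sigma2)
print(np.linalg.norm(theta_hat - theta_star))
```

Weighting each sample by the inverse of its (estimated) variance is what lets the confidence set, and hence the regret, avoid an explicit polynomial dependence on the horizon $H$ when the total reward is bounded by $1$.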
