Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost

13 February 2022
Dan Qiao
Ming Yin
Ming Min
Yu-Xiang Wang
arXiv:2202.06385
Abstract

We study the problem of reinforcement learning (RL) with low (policy) switching cost - a problem well-motivated by real-life RL applications in which deployments of new policies are costly and the number of policy updates must be low. In this paper, we propose a new algorithm based on stage-wise exploration and adaptive policy elimination that achieves a regret of $\widetilde{O}(\sqrt{H^4 S^2 A T})$ while requiring a switching cost of $O(HSA \log\log T)$. This is an exponential improvement over the best-known switching cost $O(H^2 SA \log T)$ among existing methods with $\widetilde{O}(\mathrm{poly}(H,S,A)\sqrt{T})$ regret. In the above, $S$ and $A$ denote the numbers of states and actions in an $H$-horizon episodic Markov Decision Process model with unknown transitions, and $T$ is the number of steps. As a byproduct of our new techniques, we also derive a reward-free exploration algorithm with a switching cost of $O(HSA)$. Furthermore, we prove a pair of information-theoretic lower bounds which say that (1) any no-regret algorithm must have a switching cost of $\Omega(HSA)$; (2) any $\widetilde{O}(\sqrt{T})$ regret algorithm must incur a switching cost of $\Omega(HSA \log\log T)$. Both our algorithms are thus optimal in their switching costs.
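The $\log\log T$ switching cost comes from combining policy elimination with a stage-wise schedule in which each stage is much longer than the last, so the deployed policies only change at stage boundaries. The snippet below is a minimal illustrative sketch of that counting argument, not the paper's pseudocode: the exponent schedule $T^{1-2^{-k}}$ and the helper name `stage_lengths` are assumptions chosen to show why a doubly-exponential schedule exhausts the budget $T$ after only $O(\log\log T)$ stages; the exact schedule and constants in the paper may differ.

```python
import math


def stage_lengths(T: int) -> list[int]:
    """Doubly-exponential stage schedule: stage k runs for about T**(1 - 2**-k)
    episodes, so the budget of T episodes is exhausted after O(log log T) stages.
    If the deployed policies are frozen within each stage (as in stage-wise
    exploration with policy elimination), the number of policy switches is
    proportional to the number of stages.  Illustrative sketch only."""
    lengths, used, k = [], 0, 1
    while used < T:
        # Length of stage k, clipped so the schedule never exceeds the budget T.
        length = min(math.ceil(T ** (1.0 - 2.0 ** (-k))), T - used)
        lengths.append(length)
        used += length
        k += 1
    return lengths


if __name__ == "__main__":
    for T in (10**3, 10**6, 10**9, 10**12):
        stages = stage_lengths(T)
        print(f"T = {T:>15,d}   stages = {len(stages):2d}   "
              f"log2(log2(T)) ~ {math.log2(math.log2(T)):.1f}")
```

Running the script shows the stage count tracking $\log_2\log_2 T$ even as $T$ spans nine orders of magnitude; this doubly-exponential growth of stage lengths is what separates an $O(\log\log T)$ switching budget from the $O(\log T)$ cost of the usual doubling schedules.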
