arXiv:2003.04069
Zooming for Efficient Model-Free Reinforcement Learning in Metric Spaces

9 March 2020
Ahmed Touati
Adrien Ali Taïga
Marc G. Bellemare
Abstract

Despite the wealth of research into provably efficient reinforcement learning algorithms, most works focus on tabular representations and thus struggle to handle exponentially or infinitely large state-action spaces. In this paper, we consider episodic reinforcement learning with a continuous state-action space, which is assumed to be equipped with a natural metric that characterizes the proximity between different states and actions. We propose ZoomRL, an online algorithm that leverages ideas from continuous bandits to learn an adaptive discretization of the joint space by zooming in on more promising and frequently visited regions, while carefully balancing the exploration-exploitation trade-off. We show that ZoomRL achieves a worst-case regret of $\tilde{O}(H^{5/2} K^{\frac{d+1}{d+2}})$, where $H$ is the planning horizon, $K$ is the number of episodes, and $d$ is the covering dimension of the space with respect to the metric. Moreover, our algorithm enjoys improved metric-dependent guarantees that reflect the geometry of the underlying space. Finally, we show that our algorithm is robust to small misspecification errors.
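The "zooming" idea the abstract borrows from continuous bandits can be illustrated with a minimal sketch: maintain a set of balls covering the space, play the ball with the highest optimistic value, and split a ball into finer children once it has been visited often enough that its confidence width shrinks below its radius. This is a toy one-dimensional bandit illustration only, not the authors' ZoomRL algorithm (which operates on $H$-step episodic MDPs); the `Ball` class, the $(1/r)^2$ splitting threshold, and the reward function below are all illustrative assumptions.

```python
import math

class Ball:
    """A region of the space: a center point in [0, 1] and a radius."""
    def __init__(self, center, radius):
        self.center = center
        self.radius = radius
        self.count = 0      # number of times this ball was played
        self.total = 0.0    # sum of rewards observed in this ball

    def ucb(self, t):
        """Optimistic value: empirical mean + confidence width + discretization bias."""
        if self.count == 0:
            return float("inf")  # force at least one visit to every new ball
        bonus = math.sqrt(2.0 * math.log(t + 1) / self.count)
        return self.total / self.count + bonus + self.radius

def select_and_update(balls, reward_fn, t):
    """Play the most promising ball, record the reward, and zoom in if warranted."""
    ball = max(balls, key=lambda b: b.ucb(t))
    x = ball.center                 # play the ball's center point
    r = reward_fn(x)
    ball.count += 1
    ball.total += r
    # Zooming rule (illustrative): after ~(1/radius)^2 visits, the confidence
    # width matches the radius, so refine the ball into two half-radius children.
    if ball.count >= (1.0 / ball.radius) ** 2:
        balls.remove(ball)
        half = ball.radius / 2.0
        balls.append(Ball(ball.center - half, half))
        balls.append(Ball(ball.center + half, half))
    return x, r

# Demo on a toy 1-D problem: deterministic reward peaked at x = 0.7.
balls = [Ball(0.5, 0.5)]            # one ball initially covers all of [0, 1]
reward = lambda x: 1.0 - abs(x - 0.7)
for t in range(2000):
    select_and_update(balls, reward, t)
finest = min(b.radius for b in balls)
```

After enough rounds the discretization is non-uniform: frequently visited, high-reward regions are covered by many small balls, while unpromising regions retain a coarse covering — the mechanism the abstract describes for keeping regret dependent on the covering dimension $d$ rather than on a fixed grid size.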
