
arXiv:2404.10851
Sample Complexity of the Linear Quadratic Regulator: A Reinforcement Learning Lens

16 April 2024
Amirreza Neshaei Moghaddam
A. Olshevsky
Bahman Gharesifard
Abstract

We provide the first known algorithm that provably achieves $\varepsilon$-optimality within $\widetilde{\mathcal{O}}(1/\varepsilon)$ function evaluations for the discounted discrete-time LQR problem with unknown parameters, without relying on two-point gradient estimates. These estimates are known to be unrealistic in many settings, as they require evaluating two different policies from the exact same randomly selected initialization. Our results substantially improve upon the existing literature outside the realm of two-point gradient estimates, which either leads to $\widetilde{\mathcal{O}}(1/\varepsilon^2)$ rates or relies heavily on stability assumptions.
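To make the one-point versus two-point distinction concrete, here is a minimal sketch of the two zeroth-order gradient estimators on a toy quadratic cost. The cost function, dimensions, and sampling scheme below are illustrative assumptions, not the paper's LQR setup; the point is that the two-point estimator must evaluate both perturbed policies from the same random initialization `x0`, which is the assumption the paper avoids.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # number of policy parameters (illustrative)

def cost(K, x0):
    # Toy stand-in for a policy cost under random initial state x0;
    # NOT the paper's LQR cost.
    return float(x0 @ x0) * float(K @ K)

def one_point_grad(K, r=0.1):
    # One-point estimator: a single function evaluation per sample,
    # each with its own fresh random initialization. Realistic, but
    # the estimator has much higher variance.
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)          # uniform direction on the sphere
    x0 = rng.standard_normal(d)
    return (d / r) * cost(K + r * u, x0) * u

def two_point_grad(K, r=0.1):
    # Two-point estimator: evaluates BOTH perturbed policies from the
    # SAME random initialization x0 -- often unrealistic in practice.
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    x0 = rng.standard_normal(d)
    return (d / (2 * r)) * (cost(K + r * u, x0) - cost(K - r * u, x0)) * u

K = np.ones(d)
g1 = np.mean([one_point_grad(K) for _ in range(20000)], axis=0)
g2 = np.mean([two_point_grad(K) for _ in range(20000)], axis=0)
```

Averaged over many samples, both estimators approximate the gradient of the expected cost (here $\nabla_K \, \mathbb{E}[\lVert x_0\rVert^2 \lVert K\rVert^2] = 2dK$), but the one-point average converges far more slowly, which is the variance gap behind the $\widetilde{\mathcal{O}}(1/\varepsilon)$ versus $\widetilde{\mathcal{O}}(1/\varepsilon^2)$ distinction.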
