Gauss-Newton Temporal Difference Learning with Nonlinear Function Approximation

25 February 2023
Zhifa Ke
Junyu Zhang
Zaiwen Wen
arXiv:2302.13087
Abstract

In this paper, a Gauss-Newton Temporal Difference (GNTD) learning method is proposed to solve the Q-learning problem with nonlinear function approximation. In each iteration, our method takes one Gauss-Newton (GN) step to optimize a variant of the Mean-Squared Bellman Error (MSBE), where target networks are adopted to avoid double sampling. Inexact GN steps are analyzed so that the GN updates can be computed safely and efficiently by cheap matrix iterations. Under mild conditions, non-asymptotic finite-sample convergence to the globally optimal Q-function is derived for various nonlinear function approximations. In particular, for neural network parameterization with ReLU activation, GNTD achieves an improved sample complexity of $\tilde{\mathcal{O}}(\varepsilon^{-1})$, as opposed to the $\mathcal{O}(\varepsilon^{-2})$ sample complexity of existing neural TD methods. An $\tilde{\mathcal{O}}(\varepsilon^{-1.5})$ sample complexity of GNTD is also established for general smooth function approximations. We validate our method via extensive experiments on several RL benchmarks, where GNTD exhibits both higher rewards and faster convergence than TD-type methods.
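To make the update concrete, below is a minimal sketch of one damped Gauss-Newton TD step in PyTorch, assuming a small MLP Q-network on synthetic transitions. The network architecture, the damping constant `lam`, and the batch shapes are illustrative assumptions, not the paper's exact algorithm; in particular, the paper analyzes inexact GN steps computed by cheap matrix iterations, whereas this sketch uses an exact linear solve for clarity.

```python
# Hypothetical sketch of one damped Gauss-Newton TD (GNTD-style) update.
# All shapes, the MLP architecture, and the damping constant `lam` are
# illustrative assumptions, not taken from the paper.
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

torch.manual_seed(0)
state_dim, n_actions, batch = 4, 2, 32
gamma, lam = 0.99, 1e-3  # discount factor; Levenberg-style damping (assumption)

q_net = torch.nn.Sequential(torch.nn.Linear(state_dim, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, n_actions))
target_net = torch.nn.Sequential(torch.nn.Linear(state_dim, 32), torch.nn.ReLU(),
                                 torch.nn.Linear(32, n_actions))
target_net.load_state_dict(q_net.state_dict())  # frozen copy for TD targets

# Synthetic transition batch (s, a, r, s') standing in for replay data.
s = torch.randn(batch, state_dim)
a = torch.randint(n_actions, (batch,))
rew = torch.randn(batch)
s_next = torch.randn(batch, state_dim)

# TD targets from the target network, which avoids double sampling.
with torch.no_grad():
    y = rew + gamma * target_net(s_next).max(dim=1).values

# Bellman residuals of the current Q-network, one per transition.
q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
resid = q_sa - y

# Build the (batch x n_params) Jacobian of Q_theta(s_i, a_i) row by row.
params = list(q_net.parameters())
rows = []
for i in range(batch):
    grads = torch.autograd.grad(q_sa[i], params, retain_graph=True)
    rows.append(torch.cat([g.reshape(-1) for g in grads]))
J = torch.stack(rows)

# Damped Gauss-Newton step: solve (J^T J + lam * I) delta = J^T resid.
n = J.shape[1]
delta = torch.linalg.solve(J.T @ J + lam * torch.eye(n), J.T @ resid.detach())

# Apply the update theta <- theta - delta to the flattened parameters.
with torch.no_grad():
    theta = parameters_to_vector(q_net.parameters())
    vector_to_parameters(theta - delta, q_net.parameters())
```

In a practical implementation, the exact solve would be replaced by a few cheap matrix iterations (e.g., a handful of conjugate-gradient steps), which is the regime covered by the paper's inexact-GN analysis.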
