
arXiv:2305.19575
On the Linear Convergence of Policy Gradient under Hadamard Parameterization

31 May 2023
Jiacai Liu, Jinchi Chen, Ke Wei
Abstract

The convergence of deterministic policy gradient under the Hadamard parameterization is studied in the tabular setting, and the linear convergence of the algorithm is established. To this end, we first show that the error decreases at an $O(\frac{1}{k})$ rate for all iterations. Based on this result, we further show that the algorithm attains a faster local linear convergence rate after $k_0$ iterations, where $k_0$ is a constant that depends only on the MDP problem and the initialization. To establish the local linear convergence of the algorithm, we show the contraction of the sub-optimal probability $b_s^k$ (i.e., the probability that the output policy $\pi^k$ assigns to non-optimal actions) when $k \ge k_0$.
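To give a feel for the quantities in the abstract, here is a minimal illustrative sketch, not the paper's algorithm or its full MDP setting: a hypothetical one-state bandit where the Hadamard parameterization sets $\pi(a) = \theta_a^2$ with $\theta$ on the unit sphere (so $\pi$ is automatically a distribution), and plain gradient ascent on the expected reward is followed by renormalization. The rewards, step size, and horizon are all assumptions for illustration; the tracked quantity mirrors the sub-optimal probability $b^k$.

```python
import numpy as np

# Hypothetical one-state bandit (illustration only, not the paper's setting).
# Hadamard parameterization: pi(a) = theta_a**2 with theta on the unit sphere.
r = np.array([1.0, 0.5, 0.2])      # assumed rewards; the optimal action is a* = 0
theta = np.ones(3) / np.sqrt(3.0)  # uniform initial policy
eta = 0.1                          # assumed step size

subopt = []
for k in range(200):
    # gradient of J(theta) = sum_a theta_a**2 * r_a is 2 * theta * r
    theta = theta + eta * 2.0 * theta * r
    theta = theta / np.linalg.norm(theta)  # retract back to the unit sphere
    subopt.append(1.0 - theta[0] ** 2)     # sub-optimal probability b^k

# b^k shrinks toward 0 as the iterate concentrates on the optimal action
assert subopt[-1] < 1e-3
```

In this toy run the ratio of successive sub-optimal probabilities settles near a constant below one, which is the kind of geometric (linear-rate) contraction the abstract describes for the later iterations $k \ge k_0$.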
