Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs

6 June 2022
Dongsheng Ding
Kaipeng Zhang
Jiali Duan
Tamer Başar
Mihailo R. Jovanović

Papers citing "Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs"

9 papers shown
ActSafe: Active Exploration with Safety Constraints for Reinforcement Learning
Yarden As, Bhavya Sukhija, Lenart Treven, Carmelo Sferrazza, Stelian Coros, Andreas Krause
12 Oct 2024
Identifiability and Generalizability in Constrained Inverse Reinforcement Learning
Andreas Schlaginhaufen, Maryam Kamgarpour
01 Jun 2023
OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research
Jiaming Ji, Jiayi Zhou, Borong Zhang, Juntao Dai, Xuehai Pan, Ruiyang Sun, Weidong Huang, Yiran Geng, Mickel Liu, Yaodong Yang
16 May 2023
Constrained Update Projection Approach to Safe Policy Optimization
Long Yang, Jiaming Ji, Juntao Dai, Linrui Zhang, Binbin Zhou, Pengfei Li, Yaodong Yang, Gang Pan
15 Sep 2022
Finite-Time Complexity of Online Primal-Dual Natural Actor-Critic Algorithm for Constrained Markov Decision Processes
Sihan Zeng, Thinh T. Doan, Justin Romberg
21 Oct 2021
Achieving Zero Constraint Violation for Constrained Reinforcement Learning via Primal-Dual Approach
Qinbo Bai, Amrit Singh Bedi, Mridul Agarwal, Alec Koppel, Vaneet Aggarwal
13 Sep 2021
On Linear Convergence of Policy Gradient Methods for Finite MDPs
Jalaj Bhandari, Daniel Russo
21 Jul 2020
A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach
10 Dec 2012
Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang
08 Dec 2012