ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Combining Model-Based and Model-Free Methods for Nonlinear Control: A Provably Convergent Policy Gradient Approach

12 June 2020
Guannan Qu
Chenkai Yu
S. Low
Adam Wierman
arXiv: 2006.07476
Papers citing "Combining Model-Based and Model-Free Methods for Nonlinear Control: A Provably Convergent Policy Gradient Approach"

3 citing papers:
Stabilizing Dynamical Systems via Policy Gradient Methods
Juan C. Perdomo, Jack Umenberger, Max Simchowitz
13 Oct 2021
Towards Robust Data-Driven Control Synthesis for Nonlinear Systems with Actuation Uncertainty
Andrew J. Taylor, Victor D. Dorobantu, Sarah Dean, Benjamin Recht, Yisong Yue, Aaron D. Ames
21 Nov 2020
Scalable Reinforcement Learning for Multi-Agent Networked Systems
Guannan Qu, Adam Wierman, Na Li
05 Dec 2019