Stochastic Gradient Descent with Dependent Data for Offline Reinforcement Learning

6 February 2022
Jing-rong Dong
Xin T. Tong
OffRL
arXiv:2202.02850

Papers citing "Stochastic Gradient Descent with Dependent Data for Offline Reinforcement Learning"

3 / 3 papers shown
Constant Stepsize Q-learning: Distributional Convergence, Bias and Extrapolation
Yixuan Zhang
Qiaomin Xie
25 Jan 2024

Convergence of Adam for Non-convex Objectives: Relaxed Hyperparameters and Non-ergodic Case
Meixuan He
Yuqing Liang
Jinlan Liu
Dongpo Xu
20 Jul 2023

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine
Aviral Kumar
George Tucker
Justin Fu
OffRL, GP
04 May 2020