ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Greedy-GQ with Variance Reduction: Finite-time Analysis and Improved Complexity

30 March 2021 · Shaocong Ma, Ziyi Chen, Yi Zhou, Shaofeng Zou

Papers citing "Greedy-GQ with Variance Reduction: Finite-time Analysis and Improved Complexity"

8 papers shown
  1. Regularized Q-Learning with Linear Function Approximation
     Jiachen Xi, Alfredo Garcia, P. Momcilovic · 26 Jan 2024
  2. Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient
     Hao Jia, Xiao Zhang, Jun Xu, Wei Zeng, Hao Jiang, Xiao Yan, Ji-Rong Wen · 25 Jul 2020
  3. Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning
     Harsh Gupta, R. Srikant, Lei Ying · 14 Jul 2019
  4. Finite-Time Error Bounds For Linear Stochastic Approximation and TD Learning
     R. Srikant, Lei Ying · 03 Feb 2019
  5. SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
     Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang · 04 Jul 2018
  6. A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
     Zhize Li, Jian Li · 13 Feb 2018
  7. Asynchronous Methods for Deep Reinforcement Learning
     Volodymyr Mnih, Adria Puigdomenech Badia, M. Berk Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu · 04 Feb 2016
  8. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
     Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien · 01 Jul 2014