
Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
arXiv:2006.10281
18 June 2020
Chaobing Song, Yong Jiang, Yi Ma

Papers citing "Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization"

6 / 6 papers shown
Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise
Zijian Liu, Jiawei Zhang, Zhengyuan Zhou
14 Feb 2023

RECAPP: Crafting a More Efficient Catalyst for Convex Optimization
Y. Carmon, A. Jambulapati, Yujia Jin, Aaron Sidford
17 Jun 2022

Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization
Kaiwen Zhou, Anthony Man-Cho So, James Cheng
30 Sep 2021

SVRG Meets AdaGrad: Painless Variance Reduction
Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark W. Schmidt, Simon Lacoste-Julien
18 Feb 2021

A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
Weijie Su, Stephen P. Boyd, Emmanuel J. Candes
04 Mar 2015

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang (ODL)
19 Mar 2014