ResearchTrend.AI

A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates

Kaiwen Zhou, Fanhua Shang, James Cheng
arXiv:1806.11027 · 28 June 2018

Papers citing "A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates" (22 papers)
OptEx: Expediting First-Order Optimization with Approximately Parallelized Iterations
Yao Shu, Jiongfeng Fang, Y. He, Fei Richard Yu · 18 Feb 2024
Composite federated learning with heterogeneous data
Jiaojiao Zhang, Jiang Hu, Mikael Johansson · FedML · 04 Sep 2023
Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization
Kaiwen Zhou, Anthony Man-Cho So, James Cheng · 30 Sep 2021
Asynchronous Stochastic Optimization Robust to Arbitrary Delays
Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain · 22 Jun 2021
Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums
Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng · 25 May 2021
Distributed Learning Systems with First-order Methods
Ji Liu, Ce Zhang · 12 Apr 2021
Variance Reduction via Primal-Dual Accelerated Dual Averaging for Nonsmooth Convex Finite-Sums
Chaobing Song, Stephen J. Wright, Jelena Diakonikolas · 26 Feb 2021
Lower Bounds and Optimal Algorithms for Personalized Federated Learning
Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik · FedML · 05 Oct 2020
Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
Chaobing Song, Yong Jiang, Yi-An Ma · 18 Jun 2020
Stochastic batch size for adaptive regularization in deep network optimization
Kensuke Nakamura, Stefano Soatto, Byung-Woo Hong · ODL · 14 Apr 2020
Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely, D. Kovalev, Peter Richtárik · 11 Feb 2020
The Practicality of Stochastic Optimization in Imaging Inverse Problems
Junqi Tang, K. Egiazarian, Mohammad Golbabaee, Mike Davies · 22 Oct 2019
Adaptive Weight Decay for Deep Neural Networks
Kensuke Nakamura, Byung-Woo Hong · 21 Jul 2019
A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization
Quoc Tran-Dinh, Nhan H. Pham, T. Dzung, Lam M. Nguyen · 08 Jul 2019
A Generic Acceleration Framework for Stochastic Composite Optimization
A. Kulunchakov, Julien Mairal · 03 Jun 2019
Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data
Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei-neng Chen, Tie-Yan Liu · FedML · 29 May 2019
One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods
Filip Hanzely, Peter Richtárik · 27 May 2019
Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise
A. Kulunchakov, Julien Mairal · 25 Jan 2019
Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop
D. Kovalev, Samuel Horváth, Peter Richtárik · 24 Jan 2019
Direct Acceleration of SAGA using Sampled Negative Momentum
Kaiwen Zhou · 28 Jun 2018
VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning
Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, Dacheng Tao, L. Jiao · 26 Feb 2018
Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
Nicolas Loizou, Peter Richtárik · 27 Dec 2017