ResearchTrend.AI

A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates
arXiv:1806.11027 · Cited By

Kaiwen Zhou, Fanhua Shang, James Cheng
28 June 2018

Papers citing "A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates" (35 papers shown)
Efficient Algorithms for Empirical Group Distributional Robust Optimization and Beyond
Dingzhi Yu, Yu-yan Cai, Wei Jiang, Lijun Zhang · 06 Mar 2024
OptEx: Expediting First-Order Optimization with Approximately Parallelized Iterations
Yao Shu, Jiongfeng Fang, Y. He, Fei Richard Yu · 18 Feb 2024
Composite federated learning with heterogeneous data
Jiaojiao Zhang, Jiang Hu, Mikael Johansson · FedML · 04 Sep 2023
Pareto Invariant Risk Minimization: Towards Mitigating the Optimization Dilemma in Out-of-Distribution Generalization
Yongqiang Chen, Kaiwen Zhou, Yatao Bian, Binghui Xie, Bing Wu, ..., Kaili Ma, Han Yang, P. Zhao, Bo Han, James Cheng · OOD, OODD · 15 Jun 2022
Distributed Dynamic Safe Screening Algorithms for Sparse Regularization
Runxue Bao, Xidong Wu, Wenhan Xian, Heng-Chiao Huang · 23 Apr 2022
Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization
Kaiwen Zhou, Anthony Man-Cho So, James Cheng · 30 Sep 2021
Asynchronous Stochastic Optimization Robust to Arbitrary Delays
Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain · 22 Jun 2021
Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums
Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng · 25 May 2021
Distributed Learning Systems with First-order Methods
Ji Liu, Ce Zhang · 12 Apr 2021
Variance Reduction via Primal-Dual Accelerated Dual Averaging for Nonsmooth Convex Finite-Sums
Chaobing Song, Stephen J. Wright, Jelena Diakonikolas · 26 Feb 2021
Regularization in network optimization via trimmed stochastic gradient descent with noisy label
Kensuke Nakamura, Bong-Soo Sohn, Kyoung-Jae Won, Byung-Woo Hong · NoLa · 21 Dec 2020
Lower Bounds and Optimal Algorithms for Personalized Federated Learning
Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik · FedML · 05 Oct 2020
Asynchronous Distributed Optimization with Stochastic Delays
Margalit Glasgow, Mary Wootters · 22 Sep 2020
Random extrapolation for primal-dual coordinate descent
Ahmet Alacaoglu, Olivier Fercoq, V. Cevher · 13 Jul 2020
Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
Chaobing Song, Yong Jiang, Yi Ma · 18 Jun 2020
Boosting First-Order Methods by Shifting Objective: New Schemes with Faster Worst-Case Rates
Kaiwen Zhou, Anthony Man-Cho So, James Cheng · 25 May 2020
Stochastic batch size for adaptive regularization in deep network optimization
Kensuke Nakamura, Stefano Soatto, Byung-Woo Hong · ODL · 14 Apr 2020
Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely, D. Kovalev, Peter Richtárik · 11 Feb 2020
Efficient Relaxed Gradient Support Pursuit for Sparsity Constrained Non-convex Optimization
Fanhua Shang, Bingkun Wei, Hongying Liu, Yuanyuan Liu, Jiacheng Zhuo · 02 Dec 2019
The Practicality of Stochastic Optimization in Imaging Inverse Problems
Junqi Tang, K. Egiazarian, Mohammad Golbabaee, Mike Davies · 22 Oct 2019
Randomized Iterative Methods for Linear Systems: Momentum, Inexactness and Gossip
Nicolas Loizou · 26 Sep 2019
Adaptive Weight Decay for Deep Neural Networks
Kensuke Nakamura, Byung-Woo Hong · 21 Jul 2019
A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization
Quoc Tran-Dinh, Nhan H. Pham, T. Dzung, Lam M. Nguyen · 08 Jul 2019
A Generic Acceleration Framework for Stochastic Composite Optimization
A. Kulunchakov, Julien Mairal · 03 Jun 2019
Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data
Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei-neng Chen, Tie-Yan Liu · FedML · 29 May 2019
One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods
Filip Hanzely, Peter Richtárik · 27 May 2019
Estimate Sequences for Variance-Reduced Stochastic Composite Optimization
A. Kulunchakov, Julien Mairal · 07 May 2019
Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise
A. Kulunchakov, Julien Mairal · 25 Jan 2019
Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop
D. Kovalev, Samuel Horváth, Peter Richtárik · 24 Jan 2019
ASVRG: Accelerated Proximal SVRG
Fanhua Shang, L. Jiao, Kaiwen Zhou, James Cheng, Yan Ren, Yufei Jin · ODL · 07 Oct 2018
Direct Acceleration of SAGA using Sampled Negative Momentum
Kaiwen Zhou · 28 Jun 2018
VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning
Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, Dacheng Tao, L. Jiao · 26 Feb 2018
Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
Nicolas Loizou, Peter Richtárik · 27 Dec 2017
Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than $O(1/ε)$
Yi Tian Xu, Yan Yan, Qihang Lin, Tianbao Yang · 13 Jul 2016
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang · ODL · 19 Mar 2014