An optimal randomized incremental gradient method
Guanghui Lan, Yi Zhou
8 July 2015 (arXiv:1507.02000)

Papers citing "An optimal randomized incremental gradient method"

35 of 35 citing papers shown.

1. Dynamic Anisotropic Smoothing for Noisy Derivative-Free Optimization. S. Reifenstein, T. Leleu, Yoshihisa Yamamoto. 02 May 2024.
2. A simple uniformly optimal method without line search for convex optimization. Tianjiao Li, Guanghui Lan. 16 Oct 2023.
3. DualFL: A Duality-based Federated Learning Algorithm with Communication Acceleration in the General Convex Regime. Jongho Park, Jinchao Xu. 17 May 2023. [FedML]
4. Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis. Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang. 15 Apr 2023.
5. A principled framework for the design and analysis of token algorithms. Hadrien Hendrikx. 30 May 2022. [FedML]
6. Perseus: A Simple and Optimal High-Order Method for Variational Inequalities. Tianyi Lin, Michael I. Jordan. 06 May 2022.
7. No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization. Jun-Kun Wang, Jacob D. Abernethy, Kfir Y. Levy. 22 Nov 2021.
8. Stochastic Primal-Dual Deep Unrolling. Junqi Tang, Subhadip Mukherjee, Carola-Bibiane Schönlieb. 19 Oct 2021.
9. ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method. Zhize Li. 21 Mar 2021.
10. Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques. Filip Hanzely, Boxin Zhao, Mladen Kolar. 19 Feb 2021. [FedML]
11. First-Order Methods for Convex Optimization. Pavel Dvurechensky, Mathias Staudigl, Shimrit Shtern. 04 Jan 2021. [ODL]
12. Optimal Algorithms for Convex Nested Stochastic Composite Optimization. Zhe Zhang, Guanghui Lan. 19 Nov 2020.
13. Lower Bounds and Optimal Algorithms for Personalized Federated Learning. Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik. 05 Oct 2020. [FedML]
14. Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely. 26 Aug 2020.
15. PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik. 25 Aug 2020. [ODL]
16. Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization. Chaobing Song, Yong Jiang, Yi Ma. 18 Jun 2020.
17. Optimal Complexity in Decentralized Training. Yucheng Lu, Christopher De Sa. 15 Jun 2020.
18. Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems. Filip Hanzely, D. Kovalev, Peter Richtárik. 11 Feb 2020.
19. The Practicality of Stochastic Optimization in Imaging Inverse Problems. Junqi Tang, K. Egiazarian, Mohammad Golbabaee, Mike Davies. 22 Oct 2019.
20. Stochastic First-order Methods for Convex and Nonconvex Functional Constrained Optimization. Digvijay Boob, Qi Deng, Guanghui Lan. 07 Aug 2019.
21. A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints. Qihang Lin, Selvaprabu Nadarajah, Negar Soheili, Tianbao Yang. 07 Aug 2019.
22. Asynchronous decentralized accelerated stochastic gradient descent. Guanghui Lan, Yi Zhou. 24 Sep 2018.
23. Stochastic Nested Variance Reduction for Nonconvex Optimization. Dongruo Zhou, Pan Xu, Quanquan Gu. 20 Jun 2018.
24. Lower error bounds for the stochastic gradient descent optimization algorithm: Sharp convergence rates for slowly and fast decaying learning rates. Arnulf Jentzen, Philippe von Wurstemberger. 22 Mar 2018.
25. A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization. Zhize Li, Jian Li. 13 Feb 2018.
26. Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization. Zeyuan Allen-Zhu. 12 Feb 2018. [ODL]
27. Communication-Efficient Algorithms for Decentralized and Stochastic Optimization. Guanghui Lan, Soomin Lee, Yi Zhou. 14 Jan 2017.
28. Sketching Meets Random Projection in the Dual: A Provable Recovery Algorithm for Big and High-dimensional Data. Jialei Wang, J. Lee, M. Mahdavi, Mladen Kolar, Nathan Srebro. 10 Oct 2016.
29. Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure. A. Bietti, Julien Mairal. 04 Oct 2016.
30. Dimension-Free Iteration Complexity of Finite Sum Optimization Problems. Yossi Arjevani, Ohad Shamir. 30 Jun 2016.
31. Tight Complexity Bounds for Optimizing Composite Objectives. Blake E. Woodworth, Nathan Srebro. 25 May 2016.
32. Fast Stochastic Methods for Nonsmooth Nonconvex Optimization. Sashank J. Reddi, S. Sra, Barnabás Póczós, Alex Smola. 23 May 2016.
33. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods. Zeyuan Allen-Zhu. 18 Mar 2016. [ODL]
34. A Simple Practical Accelerated Method for Finite Sums. Aaron Defazio. 08 Feb 2016.
35. Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization. Yuchen Zhang, Xiao Lin. 10 Sep 2014.