
Tight Complexity Bounds for Optimizing Composite Objectives

Blake E. Woodworth, Nathan Srebro (25 May 2016) · arXiv:1605.08003

Papers citing "Tight Complexity Bounds for Optimizing Composite Objectives"

40 papers
Tuning-Free Stochastic Optimization
  Ahmed Khaled, Chi Jin (12 Feb 2024)

Memory-Query Tradeoffs for Randomized Convex Optimization
  Xinyu Chen, Binghui Peng (21 Jun 2023)

Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
  Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang (15 Apr 2023)

Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on Classical and Recent Developments
  Quoc Tran-Dinh (30 Mar 2023)

Bayesian Optimization for Function Compositions with Applications to Dynamic Pricing
  Kunal Jain, Prabuchandran K. J., Tejas Bodas (21 Mar 2023)

Stochastic Steffensen method
  Minda Zhao, Zehua Lai, Lek-Heng Lim (28 Nov 2022) [ODL]

Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum Minimization
  Ali Kavis, Stratis Skoulakis, Kimon Antonakopoulos, L. Dadi, V. Cevher (03 Nov 2022)

RECAPP: Crafting a More Efficient Catalyst for Convex Optimization
  Y. Carmon, A. Jambulapati, Yujia Jin, Aaron Sidford (17 Jun 2022)

How catastrophic can catastrophic forgetting be in linear regression?
  Itay Evron, E. Moroshko, Rachel A. Ward, Nati Srebro, Daniel Soudry (19 May 2022) [CLL]

Efficient Convex Optimization Requires Superlinear Memory
  A. Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant (29 Mar 2022)

Distributionally Robust Optimization via Ball Oracle Acceleration
  Y. Carmon, Danielle Hausler (24 Mar 2022)

Stochastic Primal-Dual Deep Unrolling
  Junqi Tang, Subhadip Mukherjee, Carola-Bibiane Schönlieb (19 Oct 2021)

Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization
  Kaiwen Zhou, Anthony Man-Cho So, James Cheng (30 Sep 2021)

Stochastic Bias-Reduced Gradient Methods
  Hilal Asi, Y. Carmon, A. Jambulapati, Yujia Jin, Aaron Sidford (17 Jun 2021)

The Complexity of Nonconvex-Strongly-Concave Minimax Optimization
  Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, Niao He (29 Mar 2021)

ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
  Zhize Li (21 Mar 2021)

Machine Unlearning via Algorithmic Stability
  Enayat Ullah, Tung Mai, Anup B. Rao, Ryan Rossi, R. Arora (25 Feb 2021)

Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques
  Filip Hanzely, Boxin Zhao, Mladen Kolar (19 Feb 2021) [FedML]

Lower Bounds and Optimal Algorithms for Personalized Federated Learning
  Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik (05 Oct 2020) [FedML]

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
  Filip Hanzely (26 Aug 2020)

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
  Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik (25 Aug 2020) [ODL]

Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
  Chaobing Song, Yong Jiang, Yi Ma (18 Jun 2020)

Minibatch vs Local SGD for Heterogeneous Distributed Learning
  Blake E. Woodworth, Kumar Kshitij Patel, Nathan Srebro (08 Jun 2020) [FedML]

Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
  Filip Hanzely, D. Kovalev, Peter Richtárik (11 Feb 2020)

The Practicality of Stochastic Optimization in Imaging Inverse Problems
  Junqi Tang, K. Egiazarian, Mohammad Golbabaee, Mike Davies (22 Oct 2019)

Semi-Cyclic Stochastic Gradient Descent
  Hubert Eichner, Tomer Koren, H. B. McMahan, Nathan Srebro, Kunal Talwar (23 Apr 2019)

Memory-Sample Tradeoffs for Linear Regression with Small Error
  Vatsal Sharan, Aaron Sidford, Gregory Valiant (18 Apr 2019)

Lower Bounds for Parallel and Randomized Convex Optimization
  Jelena Diakonikolas, Cristóbal Guzmán (05 Nov 2018)

Parallelization does not Accelerate Convex Optimization: Adaptivity Lower Bounds for Non-smooth Convex Minimization
  Eric Balkanski, Yaron Singer (12 Aug 2018)

SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
  Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang (04 Jul 2018)

Stochastic Nested Variance Reduction for Nonconvex Optimization
  Dongruo Zhou, Pan Xu, Quanquan Gu (20 Jun 2018)

Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law
  Max Simchowitz, A. Alaoui, Benjamin Recht (04 Apr 2018)

Lower error bounds for the stochastic gradient descent optimization algorithm: Sharp convergence rates for slowly and fast decaying learning rates
  Arnulf Jentzen, Philippe von Wurstemberger (22 Mar 2018)

Leverage Score Sampling for Faster Accelerated Regression and ERM
  Naman Agarwal, Sham Kakade, Rahul Kidambi, Y. Lee, Praneeth Netrapalli, Aaron Sidford (22 Nov 2017)

Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization
  Tomoya Murata, Taiji Suzuki (01 Mar 2017) [OffRL]

Federated Optimization: Distributed Machine Learning for On-Device Intelligence
  Jakub Konecný, H. B. McMahan, Daniel Ramage, Peter Richtárik (08 Oct 2016) [FedML]

Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method
  Lihua Lei, Michael I. Jordan (12 Sep 2016)

Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
  Zeyuan Allen-Zhu (18 Mar 2016) [ODL]

An optimal randomized incremental gradient method
  Guanghui Lan, Yi Zhou (08 Jul 2015)

Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
  Yuchen Zhang, Xiao Lin (10 Sep 2014)