Optimal Black-Box Reductions Between Optimization Objectives
Zeyuan Allen-Zhu, Elad Hazan
17 March 2016 · arXiv:1603.05642

Papers citing "Optimal Black-Box Reductions Between Optimization Objectives" (19 of 19 papers shown):
 1. SAPPHIRE: Preconditioned Stochastic Variance Reduction for Faster Large-Scale Statistical Learning
    Jingruo Sun, Zachary Frangella, Madeleine Udell (28 Jan 2025)
 2. Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms
    Bin Gu, Xiyuan Wei, Hualin Zhang, Yi Chang, Heng-Chiao Huang (FedML; 03 Oct 2024)
 3. Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization
    Liang Zhang, Junchi Yang, Amin Karbasi, Niao He (26 Oct 2023)
 4. Impact of Redundancy on Resilience in Distributed Optimization and Learning
    Shuo Liu, Nirupam Gupta, Nitin H. Vaidya (16 Nov 2022)
 5. Robust Regression Revisited: Acceleration and Improved Estimation Rates
    A. Jambulapati, Jingkai Li, T. Schramm, Kevin Tian (AAML; 22 Jun 2021)
 6. Asynchronous Distributed Optimization with Redundancy in Cost Functions
    Shuo Liu, Nirupam Gupta, Nitin H. Vaidya (07 Jun 2021)
 7. First-Order Methods for Convex Optimization
    Pavel Dvurechensky, Mathias Staudigl, Shimrit Shtern (ODL; 04 Jan 2021)
 8. Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
    Chaobing Song, Yong Jiang, Yi Ma (18 Jun 2020)
 9. A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates
    Kaiwen Zhou, Fanhua Shang, James Cheng (28 Jun 2018)
10. Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization
    Zeyuan Allen-Zhu (ODL; 12 Feb 2018)
11. Optimization Methods for Large-Scale Machine Learning
    Léon Bottou, Frank E. Curtis, J. Nocedal (15 Jun 2016)
12. Tight Complexity Bounds for Optimizing Composite Objectives
    Blake E. Woodworth, Nathan Srebro (25 May 2016)
13. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
    Zeyuan Allen-Zhu (ODL; 18 Mar 2016)
14. Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling
    Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, Yang Yuan (30 Dec 2015)
15. An optimal randomized incremental gradient method
    Guanghui Lan, Yi Zhou (08 Jul 2015)
16. Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives
    Zeyuan Allen-Zhu, Yang Yuan (05 Jun 2015)
17. A Proximal Stochastic Gradient Method with Progressive Variance Reduction
    Lin Xiao, Tong Zhang (ODL; 19 Mar 2014)
18. Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
    Julien Mairal (18 Feb 2014)
19. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
    Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach (10 Dec 2012)