Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss
arXiv:2105.01778, 4 May 2021
Y. Carmon, A. Jambulapati, Yujia Jin, Aaron Sidford

Papers citing "Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss"

11 citing papers

Lower Bounds for Non-Convex Stochastic Optimization
Yossi Arjevani, Y. Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake E. Woodworth (05 Dec 2019)

Variance Reduction for Matrix Games
Y. Carmon, Yujia Jin, Aaron Sidford, Kevin Tian (03 Jul 2019)

Complexity of Highly Parallel Non-Smooth Convex Optimization
Sébastien Bubeck, Qijia Jiang, Y. Lee, Yuanzhi Li, Aaron Sidford (25 Jun 2019)

Unified Acceleration of High-Order Algorithms under Hölder Continuity and Uniform Convexity
Chaobing Song, Yong Jiang, Yi Ma (03 Jun 2019)

Lower Bounds for Parallel and Randomized Convex Optimization
Jelena Diakonikolas, Cristóbal Guzmán (05 Nov 2018)

SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang (04 Jul 2018)

Tight Complexity Bounds for Optimizing Composite Objectives
Blake E. Woodworth, Nathan Srebro (25 May 2016)

Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu (18 Mar 2016)

Minimizing the Maximal Loss: How and Why?
Shai Shalev-Shwartz, Y. Wexler (04 Feb 2016)

Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization
Roy Frostig, Rong Ge, Sham Kakade, Aaron Sidford (24 Jun 2015)

Sublinear Optimization for Machine Learning
K. Clarkson, Elad Hazan, David P. Woodruff (21 Oct 2010)