arXiv:1906.11985
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond
27 June 2019
Oliver Hinder, Aaron Sidford, N. Sohoni
Papers citing "Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond" (24 of 24 papers shown):
- Minimisation of Quasar-Convex Functions Using Random Zeroth-Order Oracles. Amir Ali Farzin, Yuen-Man Pun, Iman Shames. 04 May 2025.
- Expected Variational Inequalities. B. Zhang, Ioannis Anagnostides, Emanuel Tewolde, Ratip Emin Berker, Gabriele Farina, Vincent Conitzer, T. Sandholm. 25 Feb 2025.
- Nesterov acceleration in benignly non-convex landscapes. Kanan Gupta, Stephan Wojtowytsch. 10 Oct 2024.
- Demystifying SGD with Doubly Stochastic Gradients. Kyurae Kim, Joohwan Ko, Yian Ma, Jacob R. Gardner. 03 Jun 2024.
- Mean-field underdamped Langevin dynamics and its spacetime discretization. Qiang Fu, Ashia Wilson. 26 Dec 2023.
- Communication-Efficient Gradient Descent-Accent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates. Siqi Zhang, S. Choudhury, Sebastian U. Stich, Nicolas Loizou. [FedML] 08 Jun 2023.
- PRISE: Demystifying Deep Lucas-Kanade with Strongly Star-Convex Constraints for Multimodel Image Alignment. Yiqing Zhang, Xinming Huang, Ziming Zhang. 21 Mar 2023.
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule. Maor Ivgi, Oliver Hinder, Y. Carmon. [ODL] 08 Feb 2023.
- Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties. David Martínez-Rubio, Sebastian Pokutta. 26 Nov 2022.
- Spectral Regularization Allows Data-frugal Learning over Combinatorial Spaces. Amirali Aghazadeh, Nived Rajaraman, Tony Tu, Kannan Ramchandran. 05 Oct 2022.
- SP2: A Second Order Stochastic Polyak Method. Shuang Li, W. Swartworth, Martin Takáč, Deanna Needell, Robert Mansel Gower. 17 Jul 2022.
- On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms. Lam M. Nguyen, Trang H. Tran. 13 Jun 2022.
- Sharper Utility Bounds for Differentially Private Models. Yilin Kang, Yong Liu, Jian Li, Weiping Wang. [FedML] 22 Apr 2022.
- A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima. Tae-Eon Ko, Xiantao Li. 21 Mar 2022.
- Federated Minimax Optimization: Improved Convergence Analyses and Algorithms. Pranay Sharma, Rohan Panda, Gauri Joshi, P. Varshney. [FedML] 09 Mar 2022.
- Tackling benign nonconvexity with smoothing and stochastic gradients. Harsh Vardhan, Sebastian U. Stich. 18 Feb 2022.
- Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent. Sharan Vaswani, Benjamin Dubois-Taine, Reza Babanezhad. 21 Oct 2021.
- Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints. Shaojie Li, Yong Liu. 19 Jul 2021.
- Stochastic Polyak Stepsize with a Moving Target. Robert Mansel Gower, Aaron Defazio, Michael G. Rabbat. 22 Jun 2021.
- Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization. Jun-Kun Wang, Jacob D. Abernethy. 04 Oct 2020.
- SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation. Robert Mansel Gower, Othmane Sebbouh, Nicolas Loizou. 18 Jun 2020.
- The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication. Sebastian U. Stich, Sai Praneeth Karimireddy. [FedML] 11 Sep 2019.
- Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition. Hamed Karimi, J. Nutini, Mark Schmidt. 16 Aug 2016.
- A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights. Weijie Su, Stephen P. Boyd, Emmanuel J. Candès. 04 Mar 2015.