
Restarts subject to approximate sharpness: A parameter-free and optimal scheme for first-order methods

5 January 2023
Ben Adcock, Matthew J. Colbrook, Maksym Neyra-Nesterenko
arXiv:2301.02268

Papers citing "Restarts subject to approximate sharpness: A parameter-free and optimal scheme for first-order methods"

5 / 5 papers shown
Learning smooth functions in high dimensions: from sparse polynomials to deep neural networks
Ben Adcock, Simone Brugiapaglia, N. Dexter, S. Moraga
04 Apr 2024
Implicit regularization in AI meets generalized hardness of approximation in optimization -- Sharp results for diagonal linear networks
J. S. Wind, Vegard Antun, A. Hansen
13 Jul 2023
Acceleration Methods
Alexandre d’Aspremont, Damien Scieur, Adrien B. Taylor
23 Jan 2021
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 Aug 2016
A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
Weijie Su, Stephen P. Boyd, Emmanuel J. Candes
04 Mar 2015