ResearchTrend.AI

On the Benefits of Large Learning Rates for Kernel Methods
arXiv:2202.13733

28 February 2022
Gaspard Beugnot
Julien Mairal
Alessandro Rudi
arXiv · PDF · HTML

Papers citing "On the Benefits of Large Learning Rates for Kernel Methods"

5 citing papers shown.
Learning threshold neurons via the "edge of stability"
Kwangjun Ahn, Sébastien Bubeck, Sinho Chewi, Y. Lee, Felipe Suarez, Yi Zhang
14 Dec 2022 (MLT)

SGD with Large Step Sizes Learns Sparse Features
Maksym Andriushchenko, Aditya Varre, Loucas Pillaud-Vivien, Nicolas Flammarion
11 Oct 2022

The Dynamics of Sharpness-Aware Minimization: Bouncing Across Ravines and Drifting Towards Wide Minima
Peter L. Bartlett, Philip M. Long, Olivier Bousquet
04 Oct 2022

Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein
29 Sep 2021

Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization
Stanislaw Jastrzebski, Devansh Arpit, Oliver Åstrand, Giancarlo Kerg, Huan Wang, Caiming Xiong, R. Socher, Kyunghyun Cho, Krzysztof J. Geras
28 Dec 2020 (AI4CE)