arXiv:2305.07583
MoMo: Momentum Models for Adaptive Learning Rates

12 May 2023
Fabian Schaipp
Ruben Ohana
Michael Eickenberg
Aaron Defazio
Robert Mansel Gower

Papers citing "MoMo: Momentum Models for Adaptive Learning Rates"

(4 papers shown)
  • How far away are truly hyperparameter-free learning algorithms?
    Priya Kasimbeg, Vincent Roulet, Naman Agarwal, Sourabh Medapati, Fabian Pedregosa, Atish Agarwala, George E. Dahl (29 May 2025)
  • Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance
    Dimitris Oikonomou, Nicolas Loizou (06 Jun 2024)
  • Enhancing Policy Gradient with the Polyak Step-Size Adaption
    Yunxiang Li, Rui Yuan, Chen Fan, Mark Schmidt, Samuel Horváth, Robert Mansel Gower, Martin Takáč (11 Apr 2024)
  • Implicit Bias and Fast Convergence Rates for Self-attention
    Bhavya Vasudeva, Puneesh Deora, Christos Thrampoulidis (08 Feb 2024)