DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method

25 May 2023
Ahmed Khaled, Konstantin Mishchenko, Chi Jin
ODL · arXiv:2305.16284
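For reference, the method named in the title keeps a running maximum of the distance traveled from the initial point and a distance-weighted sum of squared gradient norms, and takes their ratio as the step size; its only input is a small initial distance estimate rather than a learning rate. Below is a minimal NumPy sketch of that update in the unconstrained case (the paper also covers projection onto a convex domain); the function name `dowg`, the fixed iteration budget, the zero-gradient guard, and the default `r_eps` are illustrative choices, not the paper's exact pseudocode.

```python
import numpy as np

def dowg(grad, x0, steps=1000, r_eps=1e-6):
    """Sketch of DoWG (Distance over Weighted Gradients).

    grad  : callable mapping a point to its (sub)gradient
    x0    : initial iterate
    r_eps : small initial distance estimate; there is no learning rate
    """
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    r_bar = r_eps  # running max distance of the iterates from x0
    v = 0.0        # distance-weighted sum of squared gradient norms
    for _ in range(steps):
        g = grad(x)
        r_bar = max(r_bar, np.linalg.norm(x - x0))
        v += r_bar**2 * float(np.dot(g, g))
        if v == 0.0:  # zero gradient so far: x is already stationary
            break
        x = x - (r_bar**2 / np.sqrt(v)) * g  # step size eta_t = r_bar^2 / sqrt(v_t)
    return x

# Example: minimize f(x) = ||x - 1||^2 with no learning-rate tuning.
x_min = dowg(lambda x: 2.0 * (x - 1.0), x0=np.zeros(5))
```

Because the step size adapts to both the distance moved and the accumulated gradient norms, a rough `r_eps` suffices; the method's guarantees depend on it only mildly, which is what makes it parameter-free in practice.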

Papers citing "DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method"

13 papers shown. Each entry gives title, authors, topic tags where present, site metrics, and publication date.

1. LightSAM: Parameter-Agnostic Sharpness-Aware Minimization
   Yifei Cheng, Li Shen, Hao Sun, Nan Yin, Xiaochun Cao, Enhong Chen · AAML · 35 / 0 / 0 · 30 May 2025
2. How far away are truly hyperparameter-free learning algorithms?
   Priya Kasimbeg, Vincent Roulet, Naman Agarwal, Sourabh Medapati, Fabian Pedregosa, Atish Agarwala, George E. Dahl · 22 / 0 / 0 · 29 May 2025
3. AutoSGD: Automatic Learning Rate Selection for Stochastic Gradient Descent
   Nikola Surjanovic, Alexandre Bouchard-Côté, Trevor Campbell · 32 / 0 / 0 · 27 May 2025
4. Analysis of an Idealized Stochastic Polyak Method and its Application to Black-Box Model Distillation
   Robert M. Gower, Guillaume Garrigos, Nicolas Loizou, Dimitris Oikonomou, Konstantin Mishchenko, Fabian Schaipp · 83 / 1 / 0 · 02 Apr 2025
5. Towards hyperparameter-free optimization with differential privacy
   Zhiqi Bu, Ruixuan Liu · 87 / 2 / 0 · 02 Mar 2025
6. MARINA-P: Superior Performance in Non-smooth Federated Optimization with Adaptive Stepsizes
   Igor Sokolov, Peter Richtárik · 149 / 1 / 0 · 22 Dec 2024
7. Tuning-Free Coreset Markov Chain Monte Carlo via Hot DoG
   Naitong Chen, Jonathan H. Huggins, Trevor Campbell · 65 / 0 / 0 · 24 Oct 2024
8. Old Optimizer, New Norm: An Anthology
   Jeremy Bernstein, Laker Newhouse · ODL · 129 / 26 / 0 · 30 Sep 2024
9. Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds
   Daniel Dodd, Louis Sharrock, Christopher Nemeth · 122 / 0 / 0 · 04 Jun 2024
10. Towards Stability of Parameter-free Optimization
    Yijiang Pang, Shuyang Yu, Hoang Bao, Jiayu Zhou · 66 / 1 / 0 · 07 May 2024
11. A simple uniformly optimal method without line search for convex optimization
    Tianjiao Li, Guanghui Lan · 107 / 25 / 0 · 16 Oct 2023
12. Normalized Gradients for All
    Francesco Orabona · 108 / 10 / 0 · 10 Aug 2023
13. Adaptive Proximal Gradient Method for Convex Optimization
    Yura Malitsky, Konstantin Mishchenko · 87 / 26 / 0 · 04 Aug 2023