ResearchTrend.AI

Automatic Gradient Descent: Deep Learning without Hyperparameters
arXiv:2304.05187
11 April 2023
Jeremy Bernstein, Chris Mingard, Kevin Huang, Navid Azizan, Yisong Yue
Topic: ODL
Links: arXiv (abs) · PDF · HTML · GitHub (207★)

Papers citing "Automatic Gradient Descent: Deep Learning without Hyperparameters"

4 / 4 papers shown
1. Time Transfer: On Optimal Learning Rate and Batch Size In The Infinite Data Limit
   Oleg Filatov, Jan Ebert, Jiangtao Wang, Stefan Kesselheim
   10 Jan 2025 · 118 · 4 · 0
2. Infinite Limits of Multi-head Transformer Dynamics
   Blake Bordelon, Hamza Tahir Chaudhry, Cengiz Pehlevan
   Topic: AI4CE
   24 May 2024 · 120 · 14 · 0
3. A Spectral Condition for Feature Learning
   Greg Yang, James B. Simon, Jeremy Bernstein
   26 Oct 2023 · 122 · 33 · 0
4. Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity
   Mu Zhou, Lucas Stoffl, Mackenzie W. Mathis, Alexander Mathis
   Topic: VOT
   13 Jun 2023 · 80 · 21 · 0