Time Transfer: On Optimal Learning Rate and Batch Size In The Infinite Data Limit

10 January 2025
Oleg Filatov
Jan Ebert
Jiangtao Wang
Stefan Kesselheim
arXiv:2410.05838
Abstract

One of the main challenges in optimal scaling of large language models (LLMs) is the prohibitive cost of hyperparameter tuning, particularly of the learning rate $\eta$ and batch size $B$. While techniques like $\mu$P (Yang et al., 2022) provide scaling rules for optimal $\eta$ transfer in the infinite model size limit, the optimal scaling behavior in the infinite data size limit remains unknown. We fill this gap by observing, for the first time, an intricate dependence of optimal $\eta$ scaling on the pretraining token budget $T$ and batch size $B$, and its relation to the critical batch size $B_\mathrm{crit}$, which we measure to evolve as $B_\mathrm{crit} \propto T$. Furthermore, we show that the optimal batch size is positively correlated with $B_\mathrm{crit}$: keeping it fixed becomes suboptimal over time even if the learning rate is scaled optimally. Surprisingly, our results demonstrate that the observed optimal $\eta$ and $B$ dynamics are preserved under $\mu$P model scaling, challenging the conventional view that $B_\mathrm{crit}$ depends solely on the loss value. Complementing optimality, we examine the sensitivity of the loss to changes in the learning rate, finding that this sensitivity decreases with increasing $T$ and remains constant under $\mu$P model scaling. We hope our results take a first step towards a unified picture of joint optimal data and model scaling.
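
As a rough illustration of the reported $B_\mathrm{crit} \propto T$ trend, the sketch below extrapolates a batch-size schedule across token budgets from a single reference measurement. It is a minimal sketch, not the paper's method: the constants, function names, and the fixed learning-rate placeholder are illustrative assumptions, and the actual $\eta(T, B)$ dependence measured in the paper is more intricate.

    # Minimal sketch (not from the paper): grow the batch size with the token
    # budget under the assumption B_crit is proportional to T, anchored at one
    # hypothetical reference point. All constants below are illustrative.

    def critical_batch_size(tokens: float, b_ref: float, t_ref: float) -> float:
        """Extrapolate B_crit assuming B_crit ∝ T from a reference (t_ref, b_ref)."""
        return b_ref * (tokens / t_ref)

    def suggested_schedule(token_budgets, b_ref, t_ref, lr_ref):
        """Illustrative schedule: batch size tracks the extrapolated B_crit.
        The fixed lr_ref is a placeholder, not the paper's eta(T, B) rule."""
        schedule = []
        for t in token_budgets:
            b = critical_batch_size(t, b_ref, t_ref)
            schedule.append({"tokens": t, "batch_size": round(b), "lr": lr_ref})
        return schedule

    if __name__ == "__main__":
        # Hypothetical reference: B_crit ≈ 256 sequences at T = 1e9 tokens.
        for row in suggested_schedule([1e9, 4e9, 16e9],
                                      b_ref=256, t_ref=1e9, lr_ref=3e-4):
            print(row)

Running the sketch prints a batch size that grows linearly with the token budget, which is the qualitative behavior the abstract describes; any real schedule would also re-tune the learning rate as $T$ and $B$ change.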
