Taming LLMs by Scaling Learning Rates with Gradient Grouping

1 June 2025
Siyuan Li
Juanxi Tian
Zedong Wang
Xin Jin
Zicheng Liu
Wentao Zhang
Dan Xu
ArXiv (abs) | PDF | HTML
Main: 9 pages, 8 figures, 15 tables; Bibliography: 4 pages; Appendix: 7 pages
Abstract

Training large language models (LLMs) poses challenges due to their massive scale and heterogeneous architectures. While adaptive optimizers like AdamW help address gradient variations, they still struggle with efficient and effective parameter-wise learning rate estimation, resulting in training instability, slow convergence, and poor compatibility with parameter-efficient fine-tuning (PEFT) techniques. This work introduces Scaling with Gradient Grouping (SGG), an optimizer wrapper that improves adaptive learning rate estimation through dynamic grouping and group-specific scaling. SGG first groups the gradient statistics in each layer into clusters and then applies cluster-specific scaling to calibrate the learning rate of each parameter, thus imposing collective group-wise constraints while maintaining precise per-parameter adaptation. Experiments on diverse (M)LLM benchmarks show that SGG integrates seamlessly with existing optimizers and offers consistent gains and faster convergence over baselines across various model sizes. Its stability across varying batch sizes and learning rates establishes SGG as a robust choice for LLM optimization.
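The abstract describes a "group, then scale" scheme: cluster per-parameter gradient statistics within each layer, then rescale learning rates per cluster. Below is a minimal, illustrative sketch of that idea, not the paper's SGG algorithm: the choice of statistic (|gradient| per element), the 1-D k-means grouping, the scaling rule (cluster mean relative to layer mean), and the helper names kmeans_1d and group_scales are all assumptions made for illustration.

# Minimal sketch of group-wise learning-rate scaling (assumed details, not SGG itself).
import torch


def kmeans_1d(values: torch.Tensor, k: int = 3, iters: int = 10) -> torch.Tensor:
    """Cluster a 1-D tensor of statistics into k groups; return cluster ids."""
    # Initialise centroids from quantiles so they spread across the value range.
    qs = torch.linspace(0.0, 1.0, k, device=values.device, dtype=values.dtype)
    centroids = torch.quantile(values, qs)
    for _ in range(iters):
        ids = torch.argmin((values[:, None] - centroids[None, :]).abs(), dim=1)
        for c in range(k):
            mask = ids == c
            if mask.any():
                centroids[c] = values[mask].mean()
    return ids


def group_scales(grad: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Per-parameter LR scale: each cluster's mean |grad| relative to the layer mean."""
    stat = grad.abs().flatten()
    ids = kmeans_1d(stat, k)
    scales = torch.ones_like(stat)
    layer_mean = stat.mean().clamp_min(1e-12)
    for c in range(k):
        mask = ids == c
        if mask.any():
            scales[mask] = stat[mask].mean() / layer_mean
    return scales.view_as(grad)


# Usage sketch: modulate a plain gradient-descent update with group-wise scales.
if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.randn(64, 64, requires_grad=True)
    x, y = torch.randn(8, 64), torch.randn(8, 64)
    loss = ((x @ w - y) ** 2).mean()
    loss.backward()
    base_lr = 1e-3
    with torch.no_grad():
        w -= base_lr * group_scales(w.grad) * w.grad

In the paper's framing, this logic would sit inside an optimizer wrapper around AdamW or a PEFT-compatible optimizer rather than a raw gradient step; the sketch only shows how clustering a layer's gradient statistics can yield per-parameter scale factors that respect group-wise structure.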

@article{li2025_2506.01049,
  title={Taming LLMs by Scaling Learning Rates with Gradient Grouping},
  author={Siyuan Li and Juanxi Tian and Zedong Wang and Xin Jin and Zicheng Liu and Wentao Zhang and Dan Xu},
  journal={arXiv preprint arXiv:2506.01049},
  year={2025}
}