ResearchTrend.AI
ZClip: Adaptive Spike Mitigation for LLM Pre-Training

3 April 2025
Abhay Kumar
Louis Owen
Nilabhra Roy Chowdhury
Fabian Güra
Abstract

Training large language models (LLMs) presents numerous challenges, including gradient instability and loss spikes. These phenomena can lead to catastrophic divergence, requiring costly checkpoint restoration and data batch skipping. Traditional gradient clipping techniques, such as constant or norm-based methods, fail to address these issues effectively because they rely on fixed thresholds or heuristics, leading to inefficient learning and frequent manual intervention. In this work, we propose ZClip, an adaptive gradient clipping algorithm that dynamically adjusts the clipping threshold based on statistical properties of gradient norms over time. Unlike prior reactive strategies, ZClip proactively adapts to training dynamics without making any prior assumptions about the scale or the temporal evolution of gradient norms. At its core, it leverages z-score-based anomaly detection to identify and mitigate large gradient spikes, preventing malignant loss spikes while not otherwise interfering with convergence. Our code is available at: this https URL.
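The core idea described above — tracking running statistics of the gradient norm and flagging outliers by z-score — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hyperparameter names (`alpha`, `z_max`), the EMA update rule, and the choice to rescale a spike back to `mean + z_max * std` are all assumptions made for the example.

```python
class ZScoreClipper:
    """Sketch of z-score-based adaptive gradient clipping.

    Keeps exponential moving averages (EMAs) of the gradient-norm mean
    and variance. A norm whose z-score exceeds ``z_max`` is treated as a
    spike and scaled down to ``mean + z_max * std``. Illustrative only;
    not the paper's exact algorithm.
    """

    def __init__(self, alpha=0.97, z_max=2.5):
        self.alpha = alpha    # EMA smoothing factor (assumed value)
        self.z_max = z_max    # z-score threshold for spike detection
        self.mean = None      # EMA of observed gradient norms
        self.var = None       # EMA of squared deviations from the mean

    def clip_factor(self, grad_norm):
        """Return the factor by which to scale gradients this step."""
        if self.mean is None:          # warm-up: seed the statistics
            self.mean, self.var = grad_norm, 0.0
            return 1.0
        std = max(self.var ** 0.5, 1e-8)
        z = (grad_norm - self.mean) / std
        if z > self.z_max:             # spike: cap at mean + z_max * std
            clipped = self.mean + self.z_max * std
            factor = clipped / grad_norm
        else:
            clipped, factor = grad_norm, 1.0
        # Update EMAs with the (possibly clipped) norm so that a single
        # spike does not contaminate the running statistics.
        self.mean = self.alpha * self.mean + (1 - self.alpha) * clipped
        self.var = (self.alpha * self.var
                    + (1 - self.alpha) * (clipped - self.mean) ** 2)
        return factor
```

In a training loop, the returned factor would multiply the gradients before the optimizer step (e.g. via `torch.nn.utils.clip_grad_norm_` or a manual scale), so that steady norms pass through unchanged while anomalous spikes are damped.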

@article{kumar2025_2504.02507,
  title={ZClip: Adaptive Spike Mitigation for LLM Pre-Training},
  author={Abhay Kumar and Louis Owen and Nilabhra Roy Chowdhury and Fabian Güra},
  journal={arXiv preprint arXiv:2504.02507},
  year={2025}
}