Understanding Pre-training and Fine-tuning from Loss Landscape Perspectives
Recent studies have revealed that the loss landscape of large language models resembles a basin: within it, models perform nearly identically, and outside it, they lose all their capabilities. In this work, we study the loss landscape of large language models further. We discover that pre-training creates a "basic capability" basin, and subsequent fine-tuning creates "specific capability" basins (e.g., math, safety, coding) within the basic capability basin. We further investigate two types of loss landscape: the most-case landscape (i.e., the landscape along most directions) and the worst-case landscape (i.e., the landscape along the worst direction). We argue that as long as benign fine-tuning remains within the most-case basin, it will not compromise previous capabilities. Similarly, any fine-tuning, including adversarial fine-tuning, that stays within the worst-case basin will not compromise previous capabilities. Finally, we theoretically show that the size of the most-case basin bounds both the size of the worst-case basin and the robustness to input perturbations. We also show that, thanks to the over-parameterization of current large language models, the basins can easily be enlarged fivefold.
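The most-case versus worst-case distinction can be illustrated on a toy quadratic loss. The sketch below is not the paper's method; it is a minimal, self-contained example (all names are illustrative) in which a linear least-squares model stands in for an LLM's parameters. It probes the loss along a random unit direction (a "most-case" slice) and along the top Hessian eigenvector (the "worst-case" slice), showing that the loss grows at least as fast in the worst-case direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a least-squares problem; w0 plays the role of the
# pre-trained optimum. These names (X, y, w0) are illustrative.
X = rng.normal(size=(64, 10))
w_true = rng.normal(size=10)
y = X @ w_true

def loss(w):
    return np.mean((X @ w - y) ** 2)

w0 = w_true.copy()  # exact minimum: loss(w0) == 0

# "Most-case" probe: loss along a random unit direction in parameter space.
d = rng.normal(size=w0.shape)
d /= np.linalg.norm(d)

radii = np.linspace(0.0, 3.0, 7)
profile = [loss(w0 + r * d) for r in radii]
# Inside the basin the loss stays near its minimum; it grows as r leaves it.

# "Worst-case" probe: for this quadratic loss the Hessian is
# H = 2 X^T X / n, and the direction of fastest growth is its top eigenvector.
H = 2 * X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(H)
d_worst = eigvecs[:, -1]
worst_profile = [loss(w0 + r * d_worst) for r in radii]
# At every radius, loss along d_worst >= loss along a random d, so the
# worst-case basin is contained in (and bounded by) the most-case basin.
```

For the quadratic loss here, loss(w0 + r·d) = r²·dᵀHd / 2, so the profile along any unit direction is a parabola whose curvature is at most the top Hessian eigenvalue; this is the one-dimensional analogue of the paper's claim that the most-case basin size bounds the worst-case basin size.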
@article{chen2025_2505.17646,
  title   = {Understanding Pre-training and Fine-tuning from Loss Landscape Perspectives},
  author  = {Huanran Chen and Yinpeng Dong and Zeming Wei and Yao Huang and Yichi Zhang and Hang Su and Jun Zhu},
  journal = {arXiv preprint arXiv:2505.17646},
  year    = {2025}
}