
Improved Supervised Fine-Tuning for Large Language Models to Mitigate Catastrophic Forgetting

Main: 5 pages
Appendix: 1 page
Bibliography: 2 pages
1 figure, 3 tables
Abstract

Supervised Fine-Tuning (SFT), while enhancing the instruction-following capabilities and domain-specific task adaptability of large language models (LLMs), often diminishes their general capabilities. Moreover, because the original pre-training data are inaccessible, catastrophic forgetting tends to be exacerbated when third-party practitioners perform SFT on open-source models. To address this challenge, we propose a novel, more cost-effective SFT method that reduces the risk of catastrophic forgetting without access to the original SFT data. Our approach first reconstructs the likely SFT instruction distribution of the base model, then applies a multi-model screening process to select optimal data, which is mixed with new data for SFT. Experimental results demonstrate that our method preserves generalization capabilities in general domains while improving task-specific performance.
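
The abstract outlines a three-step data pipeline: reconstruct plausible SFT-style instructions from the base model, screen the candidates with multiple models, and mix the surviving examples with the new task data before fine-tuning. The sketch below illustrates that flow in Python under explicit assumptions; the function names, the mean-judge-score screening rule, the 0.7 threshold, and the 30% replay ratio are hypothetical choices for illustration, not details taken from the paper.

```python
# Minimal sketch of the pipeline described in the abstract.
# Assumptions (not from the paper): the helper names, the screening rule
# (mean judge score >= threshold), the threshold value, and the replay ratio.
from dataclasses import dataclass
from typing import Callable, List
import random


@dataclass
class Example:
    instruction: str
    response: str


def sample_instructions(base_model: Callable[[str], str], n: int) -> List[Example]:
    """Step 1: approximate the base model's SFT instruction distribution by
    prompting it to produce instruction/response pairs (prompt is illustrative)."""
    examples = []
    for _ in range(n):
        instruction = base_model("Write one diverse instruction a user might ask.")
        response = base_model(instruction)
        examples.append(Example(instruction, response))
    return examples


def screen_with_models(candidates: List[Example],
                       judges: List[Callable[[Example], float]],
                       threshold: float = 0.7) -> List[Example]:
    """Step 2: multi-model screening -- keep candidates whose average score
    across several judge models clears a threshold (rule is an assumption)."""
    kept = []
    for ex in candidates:
        scores = [judge(ex) for judge in judges]
        if sum(scores) / len(scores) >= threshold:
            kept.append(ex)
    return kept


def mix_for_sft(replay: List[Example], new_task: List[Example],
                replay_ratio: float = 0.3, seed: int = 0) -> List[Example]:
    """Step 3: mix screened 'replay' examples with new task data; the 30%
    ratio is illustrative. The mixture is then fed to a standard SFT trainer."""
    rng = random.Random(seed)
    k = min(int(replay_ratio * len(new_task)), len(replay))
    mixture = new_task + rng.sample(replay, k)
    rng.shuffle(mixture)
    return mixture
```

In this reading, the replay set stands in for the inaccessible original SFT data, so catastrophic forgetting is mitigated by rehearsal on reconstructed rather than original examples; how the judges are chosen and weighted is left open here.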

@article{ding2025_2506.09428,
  title={Improved Supervised Fine-Tuning for Large Language Models to Mitigate Catastrophic Forgetting},
  author={Fei Ding and Baiqiao Wang},
  journal={arXiv preprint arXiv:2506.09428},
  year={2025}
}