
Don't Make It Up: Preserving Ignorance Awareness in LLM Fine-Tuning

Main: 3 pages · Bibliography: 3 pages · Appendix: 6 pages · 6 figures · 3 tables
Abstract

Existing work on mitigating catastrophic forgetting in large language model (LLM) fine-tuning has primarily focused on preserving specific data or tasks, while critically overlooking the degradation of essential capabilities instilled through safety alignment, particularly the model's ability to faithfully express ignorance. In this work, we show that this capability is significantly degraded during conventional fine-tuning, leading to undesired behaviors such as hallucinations. To address this novel but highly practical problem, we propose SEAT, a simple and effective fine-tuning approach that preserves both fine-tuning performance and the model's inherent ability to acknowledge its ignorance. SEAT integrates two key components: (1) sparse training that constrains activation drift, and (2) a novel entity perturbation method with KL-divergence regularization, designed to counter knowledge entanglement. Experimental results demonstrate that SEAT significantly outperforms baselines in preserving ignorance awareness while retaining fine-tuning performance, offering a more robust solution for LLM fine-tuning.
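To make the two components named in the abstract concrete, below is a minimal, illustrative sketch (not the authors' implementation) of how a fine-tuning step could combine (1) a sparse parameter update and (2) a KL-divergence penalty on entity-perturbed inputs against a frozen reference model, so that behavior outside the fine-tuning data, such as expressions of ignorance, stays close to the original model. All names here (perturb_entities, trainable_mask, lam, etc.) are assumptions for illustration, not identifiers from the paper, and the random token replacement stands in for the paper's entity perturbation method.

```python
# Hedged sketch of sparse fine-tuning with KL regularization on perturbed inputs.
# Assumes a Hugging Face-style causal LM (outputs expose .loss and .logits).
import torch
import torch.nn.functional as F


def perturb_entities(input_ids: torch.Tensor, vocab_size: int, p: float = 0.15) -> torch.Tensor:
    """Randomly replace a fraction of tokens as a stand-in for entity perturbation."""
    noisy = input_ids.clone()
    mask = torch.rand_like(noisy, dtype=torch.float) < p
    noisy[mask] = torch.randint(0, vocab_size, (int(mask.sum()),), device=noisy.device)
    return noisy


def kl_to_reference(student_logits: torch.Tensor, reference_logits: torch.Tensor) -> torch.Tensor:
    """KL(reference || student), averaged over the batch."""
    ref_logp = F.log_softmax(reference_logits, dim=-1)
    stu_logp = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(stu_logp, ref_logp, log_target=True, reduction="batchmean")


def training_step(model, frozen_ref, batch, trainable_mask, lam: float = 1.0) -> float:
    """One step: task loss on the fine-tuning batch plus a KL penalty on
    perturbed inputs, with gradients kept only for a sparse parameter subset."""
    # Standard fine-tuning loss on the task data.
    task_loss = model(input_ids=batch["input_ids"], labels=batch["labels"]).loss

    # KL regularization: stay close to the frozen model on perturbed entities.
    perturbed = perturb_entities(batch["input_ids"], model.config.vocab_size)
    with torch.no_grad():
        ref_logits = frozen_ref(input_ids=perturbed).logits
    student_logits = model(input_ids=perturbed).logits
    reg_loss = kl_to_reference(student_logits, ref_logits)

    loss = task_loss + lam * reg_loss
    loss.backward()

    # Sparse update: zero out gradients for parameters outside the chosen subset.
    for name, param in model.named_parameters():
        if param.grad is not None and not trainable_mask.get(name, False):
            param.grad.zero_()
    return loss.item()
```

In this sketch, trainable_mask is a dict mapping parameter names to booleans that selects the sparse subset allowed to change; how that subset and the perturbations are actually chosen is specified in the paper, not here.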

@article{shen2025_2506.14387,
  title={Don't Make It Up: Preserving Ignorance Awareness in LLM Fine-Tuning},
  author={William F. Shen and Xinchi Qiu and Nicola Cancedda and Nicholas D. Lane},
  journal={arXiv preprint arXiv:2506.14387},
  year={2025}
}