
Towards Objective Fine-tuning: How LLMs' Prior Knowledge Causes Potential Poor Calibration?

Main: 8 pages · Appendix: 13 pages · Bibliography: 3 pages · 21 figures · 26 tables
Abstract

Fine-tuned Large Language Models (LLMs) often exhibit poor calibration, with confidence scores that are misaligned with actual performance. While calibration has been extensively studied in models trained from scratch, the impact of LLMs' prior knowledge on calibration during fine-tuning remains understudied. Our research reveals that LLMs' prior knowledge can cause poor calibration because known data are ubiquitous in real-world fine-tuning: data aligned with LLMs' prior knowledge induce overconfidence, while new knowledge improves calibration. Our findings expose a tension: LLMs' encyclopedic knowledge, while enabling task versatility, undermines calibration through unavoidable knowledge overlaps. To address this, we propose CogCalib, a cognition-aware framework that applies targeted learning strategies according to the model's prior knowledge. Experiments across 7 tasks using 3 LLM families show that CogCalib significantly improves calibration while maintaining performance, achieving an average 57% reduction in ECE compared with standard fine-tuning on Llama3-8B. These improvements generalize well to out-of-domain tasks, enhancing the objectivity and reliability of domain-specific LLMs and making them more trustworthy for critical human-AI interaction applications.
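For context on the reported metric, below is a minimal sketch of Expected Calibration Error (ECE), the standard binned gap between confidence and accuracy referenced in the abstract. This is an illustrative implementation, not the paper's evaluation code; the function name `expected_calibration_error` and the 10-bin setting are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard binned ECE: weighted average absolute gap between
    mean confidence and empirical accuracy within each confidence bin.
    (Illustrative sketch; not the paper's exact evaluation code.)"""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        bin_conf = confidences[mask].mean()  # average confidence in this bin
        bin_acc = correct[mask].mean()       # empirical accuracy in this bin
        ece += mask.mean() * abs(bin_conf - bin_acc)
    return ece

# Example: overconfident predictions produce a large ECE.
conf = [0.95, 0.90, 0.92, 0.88, 0.97]
hits = [1, 0, 1, 0, 0]  # ~40% correct despite ~92% average confidence
print(expected_calibration_error(conf, hits))
```

A lower ECE means the model's stated confidence tracks its actual accuracy more closely, which is the calibration improvement CogCalib targets.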

@article{wang2025_2505.20903,
  title={Towards Objective Fine-tuning: How LLMs' Prior Knowledge Causes Potential Poor Calibration?},
  author={Ziming Wang and Zeyu Shi and Haoyi Zhou and Shiqi Gao and Qingyun Sun and Jianxin Li},
  journal={arXiv preprint arXiv:2505.20903},
  year={2025}
}