Inducing Epistemological Humility in Large Language Models: A Targeted SFT Approach to Reducing Hallucination

Cem Uluoglakci
Tugba Taskaya Temizel
Main: 9 pages · 8 figures · 14 tables · Bibliography: 4 pages · Appendix: 5 pages
Abstract

Large language models (LLMs) often hallucinate, producing fluent but false information, partly because supervised fine-tuning (SFT) implicitly rewards always responding. We introduce HypoTermInstruct, an SFT dataset (31,487 responses for 11,151 questions) designed to teach models epistemological humility: the ability to recognize the limits of their own knowledge and admit uncertainty. This is achieved through questions about non-existent "hypothetical" terms. We also release HypoTermQA-Enhanced, a benchmark for hallucination tendency strengthened through multiple validations. We conducted 800 controlled LoRA SFT runs across Llama3.1-8B and Gemma3-4B (base and instruct), testing 100 fine-tuning configurations with paired controls. Our results demonstrate that replacing generic instruction data with HypoTermInstruct significantly improves the HypoTerm Score (median increases of 0.19% to 25.91%) and FactScore (+0.39% to +0.86%), while maintaining stable performance on MMLU (minimal decreases of 0.26% to 0.35%). Our work demonstrates that targeted, high-quality SFT data teaching meta-cognitive skills can effectively reduce hallucination without preference/RL pipelines, providing mechanistic insights and a practical path toward more reliable AI systems.
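For concreteness, the sketch below shows what a single LoRA SFT run of the kind described in the abstract might look like using the Hugging Face trl and peft libraries. The data file name, LoRA rank/alpha, target modules, and training hyperparameters are illustrative assumptions; the abstract does not specify the paper's 100 configurations.

```python
# A minimal sketch of one LoRA SFT run in the spirit of the paper, using
# Hugging Face trl + peft. The data file, LoRA settings, and training
# hyperparameters below are illustrative assumptions, not the paper's values.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical local file standing in for the released HypoTermInstruct data;
# assumes each record carries a "text" field with the full prompt + response.
train_data = load_dataset(
    "json", data_files="hypoterminstruct_train.jsonl", split="train"
)

# LoRA adapter configuration (placeholder rank, alpha, and target modules).
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B",  # one of the two model families studied
    train_dataset=train_data,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="hypoterm-lora",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=2e-4,
    ),
)
trainer.train()
```

A paired control run, as described in the abstract, would repeat the same configuration but swap the HypoTermInstruct examples for generic instruction data, isolating the effect of the targeted data.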
