Chain-of-Thought Prompting Obscures Hallucination Cues in Large Language Models: An Empirical Evaluation

Large Language Models (LLMs) often exhibit hallucinations, generating factually incorrect or semantically irrelevant content in response to prompts. Chain-of-Thought (CoT) prompting can mitigate hallucinations by encouraging step-by-step reasoning, but its impact on hallucination detection remains underexplored. To bridge this gap, we conduct a systematic empirical evaluation. We begin with a pilot experiment, which reveals that CoT reasoning significantly alters the LLM's internal states and token probability distributions. Building on this, we evaluate how various CoT prompting methods affect mainstream hallucination detection methods across both instruction-tuned and reasoning-oriented LLMs. Specifically, we examine three key dimensions: changes in hallucination score distributions, variations in detection accuracy, and shifts in detection confidence. Our findings show that while CoT prompting helps reduce hallucination frequency, it also tends to obscure critical signals used for detection, impairing the effectiveness of various detection methods. Our study highlights an overlooked trade-off in the use of reasoning. Code is publicly available at: this https URL.
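
The detection methods discussed here typically score an answer from the model's token probability distribution. As a minimal, purely illustrative sketch (not the authors' released code), the snippet below computes one such score, the mean negative log-likelihood of an answer's tokens, under a direct prompt and a CoT-style prompt; the model name, prompts, and example answer are placeholder assumptions.

# Illustrative sketch only (not the paper's code): a simple probability-based
# hallucination score -- mean negative log-likelihood of an answer's tokens --
# computed under a direct prompt and a CoT-style prompt. The model name,
# prompts, and answer are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; substitute the instruction-tuned or reasoning LLM under study
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def answer_nll(prompt: str, answer: str) -> float:
    """Mean negative log-likelihood of the answer tokens given the prompt.
    Higher values are commonly read as a weaker-confidence / hallucination signal."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    logits = model(full_ids).logits                        # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)  # distributions over next tokens
    targets = full_ids[:, 1:]                              # next-token targets
    token_lls = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    answer_start = prompt_ids.shape[1] - 1                 # position predicting the first answer token
    return -token_lls[0, answer_start:].mean().item()

question = "Who wrote the novel 'Snow Country'?"
direct_prompt = f"Question: {question}\nAnswer:"
cot_prompt = f"Question: {question}\nLet's think step by step, then answer.\nAnswer:"
answer = " Yasunari Kawabata."

# Comparing the two scores illustrates how a CoT-style prompt can shift the
# probability-based signal that threshold-style detectors rely on.
print("direct-prompt NLL:", answer_nll(direct_prompt, answer))
print("CoT-prompt NLL:   ", answer_nll(cot_prompt, answer))

In practice, a detector would threshold or calibrate such scores; the point of the sketch is only that the same answer can receive a different score once the prompt induces explicit reasoning.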
@article{cheng2025_2506.17088,
  title   = {Chain-of-Thought Prompting Obscures Hallucination Cues in Large Language Models: An Empirical Evaluation},
  author  = {Jiahao Cheng and Tiancheng Su and Jia Yuan and Guoxiu He and Jiawei Liu and Xinqi Tao and Jingwen Xie and Huaxia Li},
  journal = {arXiv preprint arXiv:2506.17088},
  year    = {2025}
}