BARREL: Boundary-Aware Reasoning for Factual and Reliable LRMs

Recent advances in Large Reasoning Models (LRMs) have shown impressive capabilities in mathematical and logical reasoning. However, current LRMs rarely admit ignorance or respond with "I don't know". Instead, they often produce incorrect answers while displaying undue confidence, raising concerns about their factual reliability. In this work, we identify two pathological reasoning patterns, both characterized by overthinking, that contribute to these overconfident and incorrect answers: last-minute guessing and second-thought spiraling. To address these issues, we propose BARREL, a novel framework that promotes concise, boundary-aware factual reasoning. Our experiments show that BARREL training increases the reliability of DeepSeek-R1-Distill-Llama-8B from 39.33% to 61.48%, while still achieving accuracy comparable to models finetuned on reasoning data generated by R1. These results demonstrate the promise of our pilot study for building more reliable and factual System 2 LRMs.
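
The abstract reports a "reliability" score (39.33% to 61.48%) without defining it here. One plausible reading, sketched below in Python, scores a response as reliable when it is either a correct answer to an answerable question or an explicit admission of ignorance on a question beyond the model's knowledge. The record fields, refusal markers, and exact-match comparison are illustrative assumptions, not the paper's actual evaluation code.

# Minimal sketch of a boundary-aware reliability metric. Assumption: a
# response counts as reliable when it is either correct (answerable
# question) or an explicit refusal (unanswerable question). Field names
# and the refusal heuristic are hypothetical, not from the paper.

REFUSAL_MARKERS = ("i don't know", "i do not know", "i'm not sure")

def is_refusal(answer: str) -> bool:
    """Heuristically detect an explicit admission of ignorance."""
    return any(marker in answer.lower() for marker in REFUSAL_MARKERS)

def reliability(records: list[dict]) -> float:
    """Fraction of prompts handled reliably.

    Each record is assumed to hold:
      - "answer": the model's final answer string
      - "gold":   the reference answer, or None if unanswerable
    """
    reliable = 0
    for r in records:
        if r["gold"] is None:
            # Abstaining is the correct behavior past the knowledge boundary.
            reliable += is_refusal(r["answer"])
        else:
            # A confident, correct answer on an answerable question.
            reliable += (not is_refusal(r["answer"])
                         and r["answer"].strip() == r["gold"].strip())
    return reliable / len(records)

# Example: two correct answers, one proper refusal, one overconfident guess.
records = [
    {"answer": "Paris", "gold": "Paris"},
    {"answer": "1912", "gold": "1912"},
    {"answer": "I don't know.", "gold": None},
    {"answer": "42", "gold": "17"},
]
print(f"reliability = {reliability(records):.2%}")  # 75.00%

Under this reading, a model that guesses on every question is penalized exactly on the questions it cannot answer, which is consistent with the abstract's framing of overconfident guessing as the failure mode BARREL targets.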
@article{yang2025_2505.13529,
  title={BARREL: Boundary-Aware Reasoning for Factual and Reliable LRMs},
  author={Junxiao Yang and Jinzhe Tu and Haoran Liu and Xiaoce Wang and Chujie Zheng and Zhexin Zhang and Shiyao Cui and Caishun Chen and Tiantian He and Hongning Wang and Yew-Soon Ong and Minlie Huang},
  journal={arXiv preprint arXiv:2505.13529},
  year={2025}
}