Beyond Safe Answers: A Benchmark for Evaluating True Risk Awareness in Large Reasoning Models

Despite the remarkable proficiency of \textit{Large Reasoning Models} (LRMs) in handling complex reasoning tasks, their reliability in safety-critical scenarios remains uncertain. Existing evaluations primarily assess response-level safety, neglecting a critical issue we identify as \textbf{\textit{Superficial Safety Alignment} (SSA)}: a phenomenon where models produce superficially safe outputs while their internal reasoning processes fail to genuinely detect and mitigate underlying risks, resulting in inconsistent safety behaviors across multiple sampling attempts. To systematically investigate SSA, we introduce the \textbf{Beyond Safe Answers (BSA)} benchmark, comprising 2,000 challenging instances organized into three distinct SSA scenario types and spanning nine risk categories, with each instance meticulously annotated with risk rationales. Evaluations of 19 state-of-the-art LRMs demonstrate the difficulty of this benchmark, with top-performing models achieving only 38.0\% accuracy in correctly identifying risk rationales. We further explore the efficacy of safety rules, specialized fine-tuning on safety reasoning data, and diverse decoding strategies in mitigating SSA. Our work provides a comprehensive assessment tool for evaluating and improving safety reasoning fidelity in LRMs, advancing the development of genuinely risk-aware and reliably safe AI systems.
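
To make the SSA notion concrete, the following minimal Python sketch (not the authors' evaluation code) shows one way the phenomenon described above could be measured: sample a model several times on a risky prompt, score each sample for response-level safety and for whether the reasoning trace identifies the annotated risk rationale, and count cases where the answer looks safe but the reasoning misses the risk. The helper checks is_safe_answer and identifies_risk are stand-in keyword heuristics assumed here for illustration; the BSA benchmark's actual judging procedure is not specified in this abstract.

from dataclasses import dataclass
from typing import List


@dataclass
class Sample:
    reasoning: str   # the model's reasoning trace / chain of thought
    answer: str      # the final response shown to the user


def is_safe_answer(answer: str) -> bool:
    # Placeholder response-level safety check; a real setup would use a judge model.
    text = answer.lower()
    return "cannot help" in text or "refuse" in text


def identifies_risk(reasoning: str, risk_rationale: str) -> bool:
    # Placeholder check that the reasoning actually mentions the annotated risk.
    return risk_rationale.lower() in reasoning.lower()


def ssa_metrics(samples: List[Sample], risk_rationale: str) -> dict:
    # Per-instance rates over repeated samples: safe answers, correct risk
    # rationales, and the SSA rate (safe answer with risk-unaware reasoning).
    n = len(samples)
    safe = sum(is_safe_answer(s.answer) for s in samples)
    aware = sum(identifies_risk(s.reasoning, risk_rationale) for s in samples)
    superficial = sum(
        is_safe_answer(s.answer) and not identifies_risk(s.reasoning, risk_rationale)
        for s in samples
    )
    return {
        "safe_rate": safe / n,
        "rationale_acc": aware / n,
        "ssa_rate": superficial / n,
    }


if __name__ == "__main__":
    samples = [
        Sample(reasoning="The request seems fine; just be polite.",
               answer="I cannot help with that request."),
        Sample(reasoning="This could enable financial fraud, so I must refuse.",
               answer="I cannot help with that request."),
    ]
    print(ssa_metrics(samples, risk_rationale="financial fraud"))

In this toy run both answers are safe, but only one reasoning trace identifies the risk, so the instance would contribute to the SSA rate despite perfect response-level safety.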
@article{zheng2025_2505.19690,
  title   = {Beyond Safe Answers: A Benchmark for Evaluating True Risk Awareness in Large Reasoning Models},
  author  = {Baihui Zheng and Boren Zheng and Kerui Cao and Yingshui Tan and Zhendong Liu and Weixun Wang and Jiaheng Liu and Jian Yang and Wenbo Su and Xiaoyong Zhu and Bo Zheng and Kaifu Zhang},
  journal = {arXiv preprint arXiv:2505.19690},
  year    = {2025}
}