Safety in Large Reasoning Models: A Survey

24 April 2025
Cheng Wang
Yue Liu
Baolong Bi
Duzhen Zhang
Zhongzhi Li
Junfeng Fang
Bryan Hooi
    LRM
arXiv · PDF · HTML
Abstract

Large Reasoning Models (LRMs) have exhibited extraordinary capabilities in tasks such as mathematics and coding, leveraging their advanced reasoning abilities. Nevertheless, as these capabilities progress, significant concerns about their vulnerabilities and safety have arisen, posing challenges to their deployment and application in real-world settings. This paper presents a comprehensive survey of safety in LRMs, exploring and summarizing newly emerging safety risks, attacks, and defense strategies. By organizing these elements into a detailed taxonomy, this work aims to offer a clear and structured view of the current LRM safety landscape, facilitating future research and development to enhance the security and reliability of these powerful models.
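
The abstract describes organizing safety risks, attacks, and defenses into a detailed taxonomy. As a rough illustration only, the sketch below shows one way such a taxonomy could be represented programmatically; the node names and structure are hypothetical placeholders, not taken from the paper.

from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """A node in a simple safety taxonomy tree (illustrative only)."""
    name: str
    children: list["TaxonomyNode"] = field(default_factory=list)

    def add(self, child_name: str) -> "TaxonomyNode":
        # Attach and return a child node so subtrees can be built fluently.
        child = TaxonomyNode(child_name)
        self.children.append(child)
        return child

    def show(self, depth: int = 0) -> None:
        # Print the taxonomy as an indented outline.
        print("  " * depth + self.name)
        for child in self.children:
            child.show(depth + 1)

# Hypothetical top-level categories; the survey's actual taxonomy may differ,
# so consult the paper for its exact organization.
root = TaxonomyNode("LRM Safety")
root.add("Safety Risks")
root.add("Attacks")
root.add("Defense Strategies")

root.show()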

View on arXiv
@article{wang2025_2504.17704,
  title={Safety in Large Reasoning Models: A Survey},
  author={Cheng Wang and Yue Liu and Baolong Bi and Duzhen Zhang and Zhongzhi Li and Junfeng Fang and Bryan Hooi},
  journal={arXiv preprint arXiv:2504.17704},
  year={2025}
}