
To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models (arXiv:2502.12202)

16 February 2025
Zihao Zhu, Hongbao Zhang, Mingda Zhang, Ruotong Wang, Guanzong Wu, Ke Xu
Topics: AAML, LRM

Papers citing "To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models"

3 of 3 citing papers shown.

Safety in Large Reasoning Models: A Survey (24 Apr 2025)
Cheng Wang, Yong Liu, Yangqiu Song, Duzhen Zhang, ZeLin Li, Junfeng Fang, Bryan Hooi
Topics: LRM

ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs (08 Apr 2025)
Gejian Zhao, Hanzhou Wu, Xinpeng Zhang, Athanasios V. Vasilakos
Topics: LRM

Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable (01 Mar 2025)
Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Zachary Yahn, Yichang Xu, Ling Liu