A Mousetrap: Fooling Large Reasoning Models for Jailbreak with Chain of Iterative Chaos

19 February 2025
Yang Yao, Xuan Tong, Ruofan Wang, Yixu Wang, Lujundong Li, Liang Liu, Yan Teng, Yuxiang Wang
LRM
ArXiv · PDF · HTML

Papers citing "A Mousetrap: Fooling Large Reasoning Models for Jailbreak with Chain of Iterative Chaos" (2 / 2 papers shown)
1. Practical Reasoning Interruption Attacks on Reasoning Large Language Models
   Yu Cui, Cong Zuo
   SILM, AAML, LRM
   10 May 2025
2. Safety in Large Reasoning Models: A Survey
   Cheng Wang, Y. Liu, B. Li, Duzhen Zhang, Z. Li, Junfeng Fang, Bryan Hooi
   LRM
   24 Apr 2025