arXiv:2505.16765
When Safety Detectors Aren't Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques
22 May 2025
Jianing Geng, Biao Yi, Zekun Fei, Tongxi Wu, Lihai Nie, Zheli Liu
Tags: AAML
Papers citing "When Safety Detectors Aren't Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques" (2 papers)
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
22 Jan 2025
DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, ..., Shiyu Wang, S. Yu, Shunfeng Zhou, Shuting Pan, S.S. Li
Tags: ReLM, VLM, OffRL, AI4TS, LRM
Large Language Models Are Involuntary Truth-Tellers: Exploiting Fallacy Failure for Jailbreak Attacks
01 Jul 2024
Yue Zhou, Henry Peng Zou, Barbara Di Eugenio, Yang Zhang
Tags: LRM, HILM