JailBound: Jailbreaking Internal Safety Boundaries of Vision-Language Models
arXiv:2505.19610 · 26 May 2025
Jiaxin Song, Yixu Wang, Jie Li, Rui Yu, Yan Teng, Xingjun Ma, Yingchun Wang
AAML
Papers citing "JailBound: Jailbreaking Internal Safety Boundaries of Vision-Language Models" (3 of 3 papers shown)
Qwen2.5-VL Technical Report
S. Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, ..., Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, Junyang Lin
VLM · 20 Feb 2025
Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking Multimodal Large Language Models
Yifan Li, Hangyu Guo, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen
14 Mar 2024
FigStep: Jailbreaking Large Vision-Language Models via Typographic Visual Prompts
Yichen Gong, Delong Ran, Jinyuan Liu, Conglei Wang, Tianshuo Cong, Anyu Wang, Sisi Duan, Xiaoyun Wang
MLLM · 09 Nov 2023