SneakyPrompt: Jailbreaking Text-to-image Generative Models
Authors: Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, Yinzhi Cao
Tags: EGVM
Published: 20 May 2023
arXiv: 2305.12082

Papers citing "SneakyPrompt: Jailbreaking Text-to-image Generative Models"

Showing 50 of 54 citing papers.

TokenProber: Jailbreaking Text-to-image Models via Fine-grained Word Impact Analysis
Authors: Longtian Wang, Xiaofei Xie, Tianlin Li, Yuhan Zhi, Chao Shen
Published: 11 May 2025

Jailbreaking the Text-to-Video Generative Models
Authors: Jiayang Liu, Siyuan Liang, Shiqian Zhao, Rongcheng Tu, Wenbo Zhou, Xiaochun Cao, D. Tao, Siew Kei Lam
Tags: EGVM, VGen
Published: 10 May 2025

REVEAL: Multi-turn Evaluation of Image-Input Harms for Vision LLM
Authors: Madhur Jindal, Saurabh Deshpande
Tags: AAML
Published: 07 May 2025

Adversarial Attacks in Multimodal Systems: A Practitioner's Survey
Authors: Shashank Kapoor, Sanjay Surendranath Girija, Lakshit Arora, Dipen Pradhan, Ankit Shetgaonkar, Aman Raj
Tags: AAML
Published: 06 May 2025

The Dual Power of Interpretable Token Embeddings: Jailbreaking Attacks and Defenses for Diffusion Model Unlearning
Authors: Siyi Chen, Yimeng Zhang, Sijia Liu, Q. Qu
Tags: AAML
Published: 30 Apr 2025

BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts
Authors: Qingyue Wang, Qi Pang, Xixun Lin, Shuai Wang, Daoyuan Wu
Tags: MoE
Published: 24 Apr 2025

T2VShield: Model-Agnostic Jailbreak Defense for Text-to-Video Models
Authors: Siyuan Liang, Jiayang Liu, Jiecheng Zhai, Tianmeng Fang, Rongcheng Tu, A. Liu, Xiaochun Cao, Dacheng Tao
Tags: VGen
Published: 22 Apr 2025

Towards NSFW-Free Text-to-Image Generation via Safety-Constraint Direct Preference Optimization
Authors: Shouwei Ruan, Zhenyu Wu, Yao Huang, Ruochen Zhang, Yitong Sun, Caixin Kang, Xingxing Wei
Tags: EGVM
Published: 19 Apr 2025

Token-Level Constraint Boundary Search for Jailbreaking Text-to-Image Models
Authors: Jiaheng Liu, Zhaoxin Wang, Handing Wang, Cong Tian, Yaochu Jin
Published: 15 Apr 2025

Sparse Autoencoder as a Zero-Shot Classifier for Concept Erasing in Text-to-Image Diffusion Models
Authors: Zhihua Tian, Sirun Nan, Ming Xu, Shengfang Zhai, Wenjie Qu, Jian Liu, Kui Ren, Ruoxi Jia, Jiaheng Zhang
Tags: DiffM
Published: 12 Mar 2025

ToxicSQL: Migrating SQL Injection Threats into Text-to-SQL Models via Backdoor Attack
Authors: Meiyu Lin, Haichuan Zhang, Jiale Lao, Renyuan Li, Yuanchun Zhou, Carl Yang, Yang Cao, Mingjie Tang
Tags: SILM
Published: 07 Mar 2025

Jailbreaking Safeguarded Text-to-Image Models via Large Language Models
Authors: Zhengyuan Jiang, Yuepeng Hu, Yuqing Yang, Yinzhi Cao, Neil Gong
Published: 03 Mar 2025

SafeText: Safe Text-to-image Models via Aligning the Text Encoder
Authors: Yuepeng Hu, Zhengyuan Jiang, Neil Zhenqiang Gong
Published: 28 Feb 2025

Unified Prompt Attack Against Text-to-Image Generation Models
Authors: Duo Peng, Qiuhong Ke, Mark He Huang, Ping Hu, Xiaozhong Liu
Published: 23 Feb 2025

A Systematic Review of Open Datasets Used in Text-to-Image (T2I) Gen AI Model Safety
Authors: Rakeen Rouf, Trupti Bavalatti, Osama Ahmed, Dhaval Potdar, Faraz Jawed
Tags: EGVM
Published: 23 Feb 2025

T2ISafety: Benchmark for Assessing Fairness, Toxicity, and Privacy in Image Generation
Authors: Lijun Li, Zhelun Shi, Xuhao Hu, Bowen Dong, Yiran Qin, Xihui Liu, Lu Sheng, Jing Shao
Published: 21 Feb 2025

DiffGuard: Text-Based Safety Checker for Diffusion Models
Authors: Massine El Khader, Elias Al Bouzidi, Abdellah Oumida, Mohammed Sbaihi, Eliott Binard, Jean-Philippe Poli, Wassila Ouerdane, Boussad Addad, Katarzyna Kapusta
Tags: DiffM
Published: 20 Feb 2025

When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search
Authors: Xuan Chen, Yuzhou Nie, Wenbo Guo, Xiangyu Zhang
Published: 28 Jan 2025

Direct Unlearning Optimization for Robust and Safe Text-to-Image Models
Authors: Yong-Hyun Park, Sangdoo Yun, Jin-Hwa Kim, Junho Kim, Geonhui Jang, Yonghyun Jeong, Junghyo Jo, Gayoung Lee
Published: 17 Jan 2025

Towards Action Hijacking of Large Language Model-based Agent
Authors: Yuyang Zhang, Kangjie Chen, Xudong Jiang, Yuxiang Sun, Run Wang, Lina Wang
Tags: LLMAG, AAML
Published: 14 Dec 2024

SafetyDPO: Scalable Safety Alignment for Text-to-Image Generation
Authors: Runtao Liu, Chen I Chieh, Jindong Gu, Jipeng Zhang, Renjie Pi, Qifeng Chen, Philip H. S. Torr, Ashkan Khakzar, Fabio Pizzati
Tags: EGVM
Published: 13 Dec 2024

Buster: Implanting Semantic Backdoor into Text Encoder to Mitigate NSFW Content Generation
Authors: Xin Zhao, Xiaojun Chen, Yuexin Xuan, Zhendong Zhao, Xiaojun Jia, Xinfeng Li, Xiaofeng Wang
Published: 10 Dec 2024

Safeguarding Text-to-Image Generation via Inference-Time Prompt-Noise Optimization
Authors: Jiangweizhi Peng, Zhiwei Tang, Gaowen Liu, Charles Fleming, Mingyi Hong
Published: 05 Dec 2024

The Dark Side of Trust: Authority Citation-Driven Jailbreak Attacks on Large Language Models
Authors: Xikang Yang, Xuehai Tang, Jizhong Han, Songlin Hu
Published: 18 Nov 2024

Variational Bayesian Bow tie Neural Networks with Shrinkage
Authors: Alisa Sheinkman, Sara Wade
Tags: BDL, UQCV
Published: 17 Nov 2024

Jailbreak Attacks and Defenses against Multimodal Generative Models: A Survey
Authors: Xuannan Liu, Xing Cui, Peipei Li, Zekun Li, Huaibo Huang, Shuhan Xia, Miaoxuan Zhang, Yueying Zou, Ran He
Tags: AAML
Published: 14 Nov 2024

Artificial Intelligence for Biomedical Video Generation
Authors: Linyuan Li, Jianing Qiu, Anujit Saha, Lin Li, Poyuan Li, Mengxian He, Ziyu Guo, Wu Yuan
Tags: VGen
Published: 12 Nov 2024

Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning
Authors: Zihao Zhao, Yijiang Li, Yuqing Yang, Wenqing Zhang, Nuno Vasconcelos, Yinzhi Cao
Tags: MU
Published: 04 Nov 2024

AdvI2I: Adversarial Image Attack on Image-to-Image Diffusion models
Authors: Yaopei Zeng, Yuanpu Cao, Bochuan Cao, Yurui Chang, Jinghui Chen, Lu Lin
Tags: DiffM
Published: 28 Oct 2024

Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation
Authors: Anh-Vu Bui, L. Vuong, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Q. Phung
Tags: KELM, DiffM
Published: 21 Oct 2024

Unstable Unlearning: The Hidden Risk of Concept Resurgence in Diffusion Models
Authors: Vinith M. Suriyakumar, Rohan Alur, Ayush Sekhari, Manish Raghavan, Ashia C. Wilson
Published: 10 Oct 2024

Holistic Unlearning Benchmark: A Multi-Faceted Evaluation for Text-to-Image Diffusion Model Unlearning
Authors: Saemi Moon, M. Lee, Sangdon Park, Dongwoo Kim
Published: 08 Oct 2024

Attention Shift: Steering AI Away from Unsafe Content
Authors: Shivank Garg, Manyana Tiwari
Published: 06 Oct 2024

ShieldDiff: Suppressing Sexual Content Generation from Diffusion Models through Reinforcement Learning
Authors: Dong Han, Salaheldin Mohamed, Yong Li
Published: 04 Oct 2024

Chain-of-Jailbreak Attack for Image Generation Models via Editing Step by Step
Authors: Wenxuan Wang, Kuiyi Gao, Zihan Jia, Youliang Yuan, Jen-tse Huang, Qiuzhi Liu, Shuai Wang, Wenxiang Jiao, Zhaopeng Tu
Published: 04 Oct 2024

SteerDiff: Steering towards Safe Text-to-Image Diffusion Models
Authors: Hongxiang Zhang, Yifeng He, Hao Chen
Published: 03 Oct 2024

Code Vulnerability Repair with Large Language Model using Context-Aware Prompt Tuning
Authors: Arshiya Khan, Guannan Liu, Xing Gao
Tags: KELM
Published: 27 Sep 2024

Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation
Authors: G M Shahariar, Jia Chen, Jiachen Li, Yue Dong
Published: 21 Sep 2024

Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking
Authors: Stav Cohen, Ron Bitton, Ben Nassi
Published: 12 Sep 2024

Perception-guided Jailbreak against Text-to-Image Models
Authors: Yihao Huang, Le Liang, Tianlin Li, Xiaojun Jia, Run Wang, Weikai Miao, G. Pu, Yang Liu
Published: 20 Aug 2024

DiffZOO: A Purely Query-Based Black-Box Attack for Red-teaming Text-to-Image Generative Model via Zeroth Order Optimization
Authors: Pucheng Dang, Xing Hu, Dong Li, Rui Zhang, Qi Guo, Kaidi Xu
Tags: DiffM
Published: 18 Aug 2024

Attacks and Defenses for Generative Diffusion Models: A Comprehensive Survey
Authors: V. T. Truong, Luan Ba Dang, Long Bao Le
Tags: DiffM, MedIm
Published: 06 Aug 2024

Jailbreaking Text-to-Image Models with LLM-Based Agents
Authors: Yingkai Dong, Zheng Li, Xiangtao Meng, Ning Yu, Shanqing Guo
Tags: LLMAG
Published: 01 Aug 2024

Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)
Authors: Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, Tom Ault, Leslie Barrett, David Rabinowitz, John Doucette, Nhathai Phan
Published: 20 Jul 2024

RL-JACK: Reinforcement Learning-powered Black-box Jailbreaking Attack against LLMs
Authors: Xuan Chen, Yuzhou Nie, Lu Yan, Yunshu Mao, Wenbo Guo, Xiangyu Zhang
Published: 13 Jun 2024

Unique Security and Privacy Threats of Large Language Model: A Comprehensive Survey
Authors: Shang Wang, Tianqing Zhu, Bo Liu, Ming Ding, Xu Guo, Dayong Ye, Wanlei Zhou, Philip S. Yu
Tags: PILM
Published: 12 Jun 2024

Espresso: Robust Concept Filtering in Text-to-Image Models
Authors: Anudeep Das, Vasisht Duddu, Rui Zhang, Nadarajah Asokan
Tags: EGVM
Published: 30 Apr 2024

Latent Guard: a Safety Framework for Text-to-image Generation
Authors: Runtao Liu, Ashkan Khakzar, Jindong Gu, Qifeng Chen, Philip H. S. Torr, Fabio Pizzati
Published: 11 Apr 2024

Jailbreaking Prompt Attack: A Controllable Adversarial Attack against Diffusion Models
Authors: Jiachen Ma, Anda Cao, Zhiqing Xiao, Jie Zhang, Chaonan Ye, Junbo Zhao
Published: 02 Apr 2024

Hiding and Recovering Knowledge in Text-to-Image Diffusion Models via Learnable Prompts
Authors: Anh-Vu Bui, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Q. Phung
Published: 18 Mar 2024