Universal and Transferable Adversarial Attacks on Aligned Language Models

27 July 2023
Andy Zou
Zifan Wang
Nicholas Carlini
Milad Nasr
J. Zico Kolter
Matt Fredrikson
arXiv (abs) · PDF · HTML · GitHub (3,937★)

Papers citing "Universal and Transferable Adversarial Attacks on Aligned Language Models"

50 / 1,101 papers shown
White-box Multimodal Jailbreaks Against Large Vision-Language Models
Ruofan Wang
Xingjun Ma
Hanxu Zhou
Chuanjun Ji
Guangnan Ye
Yu-Gang Jiang
AAML, VLM
84
24
0
28 May 2024
Improved Generation of Adversarial Examples Against Safety-aligned LLMs
Qizhang Li
Yiwen Guo
Wangmeng Zuo
Hao Chen
AAML, SILM
85
7
0
28 May 2024
Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization
Yuanpu Cao
Tianrong Zhang
Bochuan Cao
Ziyi Yin
Lu Lin
Fenglong Ma
Jinghui Chen
LLMSV
92
33
0
28 May 2024
Learning diverse attacks on large language models for robust red-teaming and safety tuning
Seanie Lee
Minsu Kim
Lynn Cherif
David Dobre
Juho Lee
...
Kenji Kawaguchi
Gauthier Gidel
Yoshua Bengio
Nikolay Malkin
Moksh Jain
AAML
158
20
0
28 May 2024
Cross-Modal Safety Alignment: Is textual unlearning all you need?
Trishna Chakraborty
Erfan Shayegani
Zikui Cai
Nael B. Abu-Ghazaleh
M. Salman Asif
Yue Dong
Amit K. Roy-Chowdhury
Chengyu Song
85
17
0
27 May 2024
ReMoDetect: Reward Models Recognize Aligned LLM's Generations
Hyunseok Lee
Jihoon Tack
Jinwoo Shin
DeLMO
61
1
0
27 May 2024
Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models
Sheng-Hsuan Peng
Pin-Yu Chen
Matthew Hull
Duen Horng Chau
102
30
0
27 May 2024
Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models
Chia-Yi Hsu
Yu-Lin Tsai
Chih-Hsun Lin
Pin-Yu Chen
Chia-Mu Yu
Chun-ying Huang
143
56
0
27 May 2024
The Uncanny Valley: Exploring Adversarial Robustness from a Flatness Perspective
Nils Philipp Walter
Linara Adilova
Jilles Vreeken
Michael Kamp
AAML
108
2
0
27 May 2024
Visual-RolePlay: Universal Jailbreak Attack on MultiModal Large Language Models via Role-playing Image Character
Siyuan Ma
Weidi Luo
Yu Wang
Xiaogeng Liu
132
29
0
25 May 2024
No Two Devils Alike: Unveiling Distinct Mechanisms of Fine-tuning Attacks
Chak Tou Leong
Yi Cheng
Kaishuai Xu
Jian Wang
Hanlin Wang
Wenjie Li
AAML
138
24
0
25 May 2024
Efficient Adversarial Training in LLMs with Continuous Attacks
Sophie Xhonneux
Alessandro Sordoni
Stephan Günnemann
Gauthier Gidel
Leo Schwinn
AAML
145
56
0
24 May 2024
ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users
Guanlin Li
Kangjie Chen
Shudong Zhang
Jie Zhang
Tianwei Zhang
EGVM
97
14
0
24 May 2024
Robustifying Safety-Aligned Large Language Models through Clean Data Curation
Xiaoqun Liu
Jiacheng Liang
Muchao Ye
Zhaohan Xi
AAML
123
23
0
24 May 2024
Extracting Prompts by Inverting LLM Outputs
Collin Zhang
John X. Morris
Vitaly Shmatikov
71
22
0
23 May 2024
ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation
Jingnan Zheng
Han Wang
An Zhang
Tai D. Nguyen
Jun Sun
Tat-Seng Chua
LLMAG
99
23
0
23 May 2024
Efficient Universal Goal Hijacking with Semantics-guided Prompt Organization
Yihao Huang
Chong Wang
Xiaojun Jia
Qing Guo
Felix Juefei Xu
Jian Zhang
G. Pu
Yang Liu
109
9
0
23 May 2024
WordGame: Efficient & Effective LLM Jailbreak via Simultaneous Obfuscation in Query and Response
Tianrong Zhang
Bochuan Cao
Yuanpu Cao
Lu Lin
Prasenjit Mitra
Jinghui Chen
AAML
106
12
0
22 May 2024
Safety Alignment for Vision Language Models
Zhendong Liu
Yuanbi Nie
Yingshui Tan
Xiangyu Yue
Qiushi Cui
Chongjun Wang
Xiaoyong Zhu
Bo Zheng
VLM, MLLM
117
12
0
22 May 2024
How to Trace Latent Generative Model Generated Images without Artificial Watermark?
Zhenting Wang
Vikash Sehwag
Chen Chen
Lingjuan Lyu
Dimitris N. Metaxas
Shiqing Ma
WIGM
82
9
0
22 May 2024
Model Editing as a Robust and Denoised variant of DPO: A Case Study on Toxicity
Rheeya Uppaal
Apratim De
Yiting He
Yiquao Zhong
Junjie Hu
158
10
0
22 May 2024
Securing the Future of GenAI: Policy and Technology
Mihai Christodorescu
Craven
Soheil Feizi
Neil Zhenqiang Gong
Mia Hoffmann
...
Jessica Newman
Emelia Probasco
Yanjun Qi
Khawaja Shams
Turek
SILM
99
6
0
21 May 2024
GPT-4 Jailbreaks Itself with Near-Perfect Success Using Self-Explanation
Govind Ramesh
Yao Dou
Wei Xu
PILM
111
17
0
21 May 2024
Lockpicking LLMs: A Logit-Based Jailbreak Using Token-level Manipulation
Yuxi Li
Yi Liu
Yuekang Li
Ling Shi
Gelei Deng
Shengquan Chen
Kailong Wang
119
12
0
20 May 2024
Safeguarding Vision-Language Models Against Patched Visual Prompt Injectors
Jiachen Sun
Changsheng Wang
Jiong Wang
Yiwei Zhang
Chaowei Xiao
AAML, VLM
88
4
0
17 May 2024
What is it for a Machine Learning Model to Have a Capability?
Jacqueline Harding
Nathaniel Sharadin
ELM
83
3
0
14 May 2024
Risks and Opportunities of Open-Source Generative AI
Francisco Eiras
Aleksander Petrov
Bertie Vidgen
Christian Schroeder
Fabio Pizzati
...
Matthew Jackson
Philip H. S. Torr
Trevor Darrell
Y. Lee
Jakob N. Foerster
96
19
0
14 May 2024
SpeechGuard: Exploring the Adversarial Robustness of Multimodal Large Language Models
Raghuveer Peri
Sai Muralidhar Jayanthi
S. Ronanki
Anshu Bhatia
Karel Mundnich
...
Srikanth Vishnubhotla
Daniel Garcia-Romero
S. Srinivasan
Kyu J. Han
Katrin Kirchhoff
AAML
80
3
0
14 May 2024
PARDEN, Can You Repeat That? Defending against Jailbreaks via Repetition
Ziyang Zhang
Qizhen Zhang
Jakob N. Foerster
AAML
102
22
0
13 May 2024
LLM-Generated Black-box Explanations Can Be Adversarially Helpful
R. Ajwani
Shashidhar Reddy Javaji
Frank Rudzicz
Zining Zhu
AAML
80
8
0
10 May 2024
PLeak: Prompt Leaking Attacks against Large Language Model Applications
Bo Hui
Haolin Yuan
Neil Zhenqiang Gong
Philippe Burlina
Yinzhi Cao
AAML, LLMAG, SILM
147
45
0
10 May 2024
Revisiting character-level adversarial attacks
Elias Abad Rocamora
Yongtao Wu
Fanghui Liu
Grigorios G. Chrysos
Volkan Cevher
AAML
96
4
0
07 May 2024
Beyond human subjectivity and error: a novel AI grading system
Alexandra Gobrecht
Felix Tuma
Moritz Möller
Thomas Zöller
Mark Zakhvatkin
Alexandra Wuttig
Holger Sommerfeldt
Sven Schütt
33
5
0
07 May 2024
A Causal Explainable Guardrails for Large Language Models
Zhixuan Chu
Yan Wang
Longfei Li
Peng Kuang
Zhan Qin
Kui Ren
LLMSV
97
9
0
07 May 2024
Can LLMs Deeply Detect Complex Malicious Queries? A Framework for Jailbreaking via Obfuscating Intent
Shang Shang
Xinqiang Zhao
Zhongjiang Yao
Yepeng Yao
Liya Su
Zijing Fan
Xiaodan Zhang
Zhengwei Jiang
113
6
0
06 May 2024
When LLMs Meet Cybersecurity: A Systematic Literature Review
Jie Zhang
Haoyu Bu
Hui Wen
Yu Chen
Lun Li
Hongsong Zhu
143
47
0
06 May 2024
PICLe: Eliciting Diverse Behaviors from Large Language Models with Persona In-Context Learning
Hyeong Kyu Choi
Yixuan Li
123
19
0
03 May 2024
ProFLingo: A Fingerprinting-based Intellectual Property Protection Scheme for Large Language Models
Heng Jin
Chaoyu Zhang
Shanghao Shi
W. Lou
Y. T. Hou
54
3
0
03 May 2024
Generative AI in Cybersecurity
Shivani Metta
Isaac Chang
Jack Parker
Michael P. Roman
Arturo F. Ehuan
70
5
0
02 May 2024
Mixture of insighTful Experts (MoTE): The Synergy of Thought Chains and Expert Mixtures in Self-Alignment
Zhili Liu
Yunhao Gou
Kai Chen
Lanqing Hong
Jiahui Gao
...
Yu Zhang
Zhenguo Li
Xin Jiang
Qiang Liu
James T. Kwok
MoE
243
10
0
01 May 2024
Evaluating and Mitigating Linguistic Discrimination in Large Language Models
Guoliang Dong
Haoyu Wang
Jun Sun
Xinyu Wang
82
4
0
29 Apr 2024
Towards Incremental Learning in Large Language Models: A Critical Review
M. Jovanovic
Peter Voss
ELM, CLL, KELM
116
5
0
28 Apr 2024
Exploring the Robustness of In-Context Learning with Noisy Labels
Chen Cheng
Xinzhi Yu
Haodong Wen
Jinsong Sun
Guanzhang Yue
Yihao Zhang
Zeming Wei
NoLa
61
8
0
28 Apr 2024
Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo
Stephen Zhao
Rob Brekelmans
Alireza Makhzani
Roger C. Grosse
89
41
0
26 Apr 2024
Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications
Quan Zhang
Binqi Zeng
Chijin Zhou
Gwihwan Go
Heyuan Shi
Yu Jiang
SILM, AAML
87
24
0
26 Apr 2024
Talking Nonsense: Probing Large Language Models' Understanding of Adversarial Gibberish Inputs
Valeriia Cherepanova
James Zou
AAML
102
6
0
26 Apr 2024
Near to Mid-term Risks and Opportunities of Open-Source Generative AI
Francisco Eiras
Aleksandar Petrov
Bertie Vidgen
Christian Schroeder de Witt
Fabio Pizzati
...
Paul Röttger
Philip Torr
Trevor Darrell
Y. Lee
Jakob N. Foerster
115
8
0
25 Apr 2024
VISLA Benchmark: Evaluating Embedding Sensitivity to Semantic and Lexical Alterations
Sri Harsha Dumpala
Aman Jaiswal
Chandramouli Shama Sastry
E. Milios
Sageev Oore
Hassan Sajjad
VLM, CoGe
106
0
0
25 Apr 2024
Don't Say No: Jailbreaking LLM by Suppressing Refusal
Yukai Zhou
Jian Lou
Zhijie Huang
Zhan Qin
Yibei Yang
Wenjie Wang
AAML
116
19
0
25 Apr 2024
XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference
João Monteiro
Étienne Marcotte
Pierre-Andre Noel
Valentina Zantedeschi
David Vázquez
Nicolas Chapados
Christopher Pal
Perouz Taslakian
77
5
0
23 Apr 2024