Continuous Embedding Attacks via Clipped Inputs in Jailbreaking Large Language Models
arXiv 2407.13796 · 16 July 2024
Zihao Xu, Yi Liu, Gelei Deng, Kailong Wang, Yuekang Li, Ling Shi, S. Picek
Tags: KELM
Papers citing "Continuous Embedding Attacks via Clipped Inputs in Jailbreaking Large Language Models" (5 of 5 shown)
1. Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space
   Leo Schwinn, David Dobre, Sophie Xhonneux, Gauthier Gidel, Stephan Günnemann
   AAML · 132 / 47 / 0 · 14 Feb 2024
2. Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
   Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael B. Abu-Ghazaleh
   AAML · 223 / 163 / 0 · 16 Oct 2023
3. Training language models to follow instructions with human feedback
   Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
   OSLM · ALM · 886 / 13,176 / 0 · 04 Mar 2022
4. PyTorch: An Imperative Style, High-Performance Deep Learning Library
   Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
   ODL · 544 / 42,591 / 0 · 03 Dec 2019
5. Explaining and Harnessing Adversarial Examples
   Ian Goodfellow, Jonathon Shlens, Christian Szegedy
   AAML · GAN · 282 / 19,121 / 0 · 20 Dec 2014