Training-free Lexical Backdoor Attacks on Language Models (arXiv: 2302.04116)
8 February 2023
Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan, Chunyang Chen
SILM
Papers citing "Training-free Lexical Backdoor Attacks on Language Models" (5 papers)
PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented Generation in Large Language Models via Bilevel Optimization
Yang Jiao, X. Wang, Kai Yang
AAML, SILM
10 Apr 2025
Tricking LLMs into Disobedience: Formalizing, Analyzing, and Detecting Jailbreaks
Abhinav Rao, S. Vashistha, Atharva Naik, Somak Aditya, Monojit Choudhury
24 May 2023
How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models
Phillip Rust, Jonas Pfeiffer, Ivan Vulić, Sebastian Ruder, Iryna Gurevych
31 Dec 2020
Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
LM&MA, VLM
18 Mar 2020
Adversarial Training for Aspect-Based Sentiment Analysis with BERT
Akbar Karimi, L. Rossi, Andrea Prati
30 Jan 2020