Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks
Zhaohan Xi, Tianyu Du, Changjiang Li, Ren Pang, S. Ji, Jinghui Chen, Fenglong Ma, Ting Wang
23 September 2023 · arXiv:2309.13256 · Topics: AAML

Papers citing "Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks" (7 of 7 shown):

The Ultimate Cookbook for Invisible Poison: Crafting Subtle Clean-Label Text Backdoors with Style Attributes
Wencong You, Daniel Lowd · 24 Apr 2025 · 0 citations

PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented Generation in Large Language Models via Bilevel Optimization
Yang Jiao, Xiao Wang, Kai Yang · 10 Apr 2025 · Topics: AAML, SILM · 0 citations

CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models
Yuetai Li, Zhangchen Xu, Fengqing Jiang, Luyao Niu, D. Sahabandu, Bhaskar Ramasubramanian, Radha Poovendran · 18 Jun 2024 · Topics: SILM, AAML · 7 citations

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant · 18 Apr 2021 · Topics: VPVLM · 3,858 citations

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen · 31 Dec 2020 · 1,924 citations

Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification
Chuanshuai Chen, Jiazhu Dai · 11 Jul 2020 · Topics: SILM · 125 citations

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel · 03 Sep 2019 · Topics: KELM, AI4MH · 2,588 citations