arXiv:2402.12026
Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space
19 February 2024
Zongru Wu, Zhuosheng Zhang, Pengzhou Cheng, Gongshen Liu
AAML
Papers citing "Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space"
8 / 8 papers shown
NLP Security and Ethics, in the Wild
Heather Lent, Erick Galinkin, Yiyi Chen, Jens Myrup Pedersen, Leon Derczynski, Johannes Bjerva
SILM
09 Apr 2025
Gracefully Filtering Backdoor Samples for Generative Large Language Models without Retraining
Zongru Wu, Pengzhou Cheng, Lingyong Fang, Zhuosheng Zhang, Gongshen Liu
AAML, SILM
03 Dec 2024
Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge
Ansh Arora, Xuanli He, Maximilian Mozes, Srinibas Swain, Mark Dras, Qiongkai Xu
SILM, MoMe, AAML
29 Feb 2024
Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review
Pengzhou Cheng, Zongru Wu, Wei Du, Haodong Zhao, Wei Lu, Gongshen Liu
SILM, AAML
12 Sep 2023
NCL: Textual Backdoor Defense Using Noise-augmented Contrastive Learning
Shengfang Zhai, Qingni Shen, Xiaoyi Chen, Weilong Wang, Cong Li, Yuejian Fang, Zhonghai Wu
AAML
03 Mar 2023
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer
Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, Maosong Sun
AAML, SILM
14 Oct 2021
Multi-scale Deep Neural Network (MscaleDNN) for Solving Poisson-Boltzmann Equation in Complex Domains
Ziqi Liu, Wei Cai, Zhi-Qin John Xu
AI4CE
22 Jul 2020
Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification
Chuanshuai Chen, Jiazhu Dai
SILM
11 Jul 2020