Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information
arXiv 2204.05255, 11 April 2022
Yi Zeng, Minzhou Pan, H. Just, Lingjuan Lyu, M. Qiu, R. Jia
AAML
Papers citing "Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information" (50 of 61 papers shown)
FFCBA: Feature-based Full-target Clean-label Backdoor Attacks. Yangxu Yin, H. Chen, Yudong Gao, Peng Sun, Liantao Wu, Zehan Li, Wei Liu. AAML. 29 Apr 2025.
TAPE: Tailored Posterior Difference for Auditing of Machine Unlearning. Weiqi Wang, Zhiyi Tian, An Liu, Shui Yu. 27 Feb 2025.
Multi-Target Federated Backdoor Attack Based on Feature Aggregation. Lingguag Hao, K. Hao, Bing Wei, Xue-song Tang. FedML, AAML. 23 Feb 2025.
LADDER: Multi-objective Backdoor Attack via Evolutionary Algorithm. Dazhuang Liu, Yanqi Qiao, Rui Wang, K. Liang, Georgios Smaragdakis. AAML. 28 Nov 2024.
Hide in Plain Sight: Clean-Label Backdoor for Auditing Membership Inference. Depeng Chen, Hao Chen, Hulin Jin, Jie Cui, Hong Zhong. 24 Nov 2024.
How to Defend Against Large-scale Model Poisoning Attacks in Federated Learning: A Vertical Solution. Jinbo Wang, Ruijin Wang, Fengli Zhang. FedML, AAML. 16 Nov 2024.
Backdoor Attack on Vertical Federated Graph Neural Network Learning. Jirui Yang, Peng Chen, Zhihui Lu, Ruijun Deng, Qiang Duan, Jianping Zeng. AAML, FedML. 15 Oct 2024.
Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models. Boheng Li, Yanhao Wei, Yankai Fu, Ziyi Wang, Yiming Li, Jie Zhang, Run Wang, Tianwei Zhang. DiffM, AAML. 14 Oct 2024.
Using Interleaved Ensemble Unlearning to Keep Backdoors at Bay for Finetuning Vision Transformers. Zeyu Michael Li. AAML. 01 Oct 2024.
Persistent Backdoor Attacks in Continual Learning. Zhen Guo, Abhinav Kumar, R. Tourani. AAML. 20 Sep 2024.
Protecting against simultaneous data poisoning attacks. Neel Alex, Shoaib Ahmed Siddiqui, Amartya Sanyal, David M. Krueger. AAML. 23 Aug 2024.
Compromising Embodied Agents with Contextual Backdoor Attacks. Aishan Liu, Yuguang Zhou, Xianglong Liu, Tianyuan Zhang, Siyuan Liang, ..., Tianlin Li, Junqi Zhang, Wenbo Zhou, Qing Guo, Dacheng Tao. LLMAG, AAML. 06 Aug 2024.
Towards Clean-Label Backdoor Attacks in the Physical World. Thinh Dao, Cuong Chi Le, Khoa D. Doan, Kok-Seng Wong. AAML. 27 Jul 2024.
A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks. Yixiang Qiu, Hao Fang, Hongyao Yu, Bin Chen, Meikang Qiu, Shu-Tao Xia. AAML. 18 Jul 2024.
UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening. Shuyang Cheng, Guangyu Shen, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Hanxi Guo, Shiqing Ma, Xiangyu Zhang. AAML. 16 Jul 2024.
Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks. Quang H. Nguyen, Nguyen Ngoc-Hieu, The-Anh Ta, Thanh Nguyen-Tang, Kok-Seng Wong, Hoang Thanh-Tung, Khoa D. Doan. AAML. 15 Jul 2024.
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack. Lijia Yu, Shuang Liu, Yibo Miao, Xiao-Shan Gao, Lijun Zhang. AAML. 02 Jun 2024.
AI Risk Management Should Incorporate Both Safety and Security. Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, ..., Chaowei Xiao, Bo-wen Li, Dawn Song, Peter Henderson, Prateek Mittal. AAML. 29 May 2024.
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models. Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie. AAML. 28 May 2024.
PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics. Sunay Bhat, Jeffrey Q. Jiang, Omead Brandon Pooladzandi, Alexander Branch, Gregory Pottie. AAML. 28 May 2024.
Invisible Backdoor Attack against Self-supervised Learning. Hanrong Zhang, Zhenting Wang, Tingxu Han, Mingyu Jin, Chenlu Zhan, Mengnan Du, Hongwei Wang, Shiqing Ma. AAML, SSL. 23 May 2024.
IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency. Linshan Hou, Ruili Feng, Zhongyun Hua, Wei Luo, Leo Yu Zhang, Yiming Li. AAML. 16 May 2024.
Backdoor Contrastive Learning via Bi-level Trigger Optimization. Weiyu Sun, Xinyu Zhang, Hao Lu, Ying-Cong Chen, Ting Wang, Jinghui Chen, Lu Lin. 11 Apr 2024.
Clean-image Backdoor Attacks. Dazhong Rong, Guoyao Yu, Shuheng Shen, Xinyi Fu, Peng Qian, Jianhai Chen, Qinming He, Xing Fu, Weiqiang Wang. 22 Mar 2024.
Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm. Yanqi Qiao, Dazhuang Liu, Rui Wang, Kaitai Liang. AAML. 23 Feb 2024.
Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment. Jiong Wang, Jiazhao Li, Yiquan Li, Xiangyu Qi, Junjie Hu, Yixuan Li, P. McDaniel, Muhao Chen, Bo Li, Chaowei Xiao. AAML, SILM. 22 Feb 2024.
Test-Time Backdoor Attacks on Multimodal Large Language Models. Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min Lin. AAML. 13 Feb 2024.
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge. Jiahe Lan, Jie Wang, Baochen Yan, Zheng Yan, Elisa Bertino. AAML. 15 Dec 2023.
Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise. Yixin Liu, Kaidi Xu, Xun Chen, Lichao Sun. 22 Nov 2023.
Attention-Enhancing Backdoor Attacks Against BERT-based Models. Weimin Lyu, Songzhu Zheng, Lu Pang, Haibin Ling, Chao Chen. 23 Oct 2023.
Prompt Backdoors in Visual Prompt Learning. Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang. VLM, VPVLM, AAML, SILM. 11 Oct 2023.
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson. SILM. 05 Oct 2023.
MASTERKEY: Practical Backdoor Attack Against Speaker Verification Systems. Hanqing Guo, Xun Chen, Junfeng Guo, Li Xiao, Qiben Yan. 13 Sep 2023.
DFB: A Data-Free, Low-Budget, and High-Efficacy Clean-Label Backdoor Attack. Binhao Ma, Jiahui Wang, Dejun Wang, Bo Meng. AAML. 18 Aug 2023.
Beating Backdoor Attack at Its Own Game. Min Liu, Alberto L. Sangiovanni-Vincentelli, Xiangyu Yue. AAML. 28 Jul 2023.
Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound. Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Stefanos Koffas, Yiming Li. AAML. 17 Jul 2023.
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy. Zihao Zhu, Ruotong Wang, Shaokui Wei, Li Shen, Yanbo Fan, Baoyuan Wu. AAML, SILM. 14 Jul 2023.
On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks. Wenxiao Wang, S. Feizi. AAML. 28 Jun 2023.
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios. Ziqiang Li, Hong Sun, Pengfei Xia, Heng Li, Beihao Xia, Yi Wu, Bin Li. AAML. 14 Jun 2023.
A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks. Ziqiang Li, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, Qinglang Guo, Bin Li. AAML. 14 Jun 2023.
Revisiting Data-Free Knowledge Distillation with Poisoned Teachers. Junyuan Hong, Yi Zeng, Shuyang Yu, Lingjuan Lyu, R. Jia, Jiayu Zhou. AAML. 04 Jun 2023.
LAVA: Data Valuation without Pre-Specified Learning Algorithms. H. Just, Feiyang Kang, Jiachen T. Wang, Yi Zeng, Myeongseob Ko, Ming Jin, R. Jia. 28 Apr 2023.
ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms. Minzhou Pan, Yi Zeng, Lingjuan Lyu, X. Lin, R. Jia. AAML. 22 Feb 2023.
Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines. Eugene Bagdasaryan, Vitaly Shmatikov. AAML. 09 Feb 2023.
Temporal Robustness against Data Poisoning. Wenxiao Wang, S. Feizi. AAML, OOD. 07 Feb 2023.
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering. Rui Zhu, Di Tang, Siyuan Tang, Guanhong Tao, Shiqing Ma, Xiaofeng Wang, Haixu Tang. DD. 29 Jan 2023.
Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack. Tzvi Lederer, Gallil Maimon, Lior Rokach. AAML. 05 Jan 2023.
Flareon: Stealthy any2any Backdoor Injection via Poisoned Augmentation. Tianrui Qin, Xianghuan He, Xitong Gao, Yiren Zhao, Kejiang Ye, Chengjie Xu. AAML. 20 Dec 2022.
The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning. Virat Shejwalkar, Lingjuan Lyu, Amir Houmansadr. AAML. 01 Nov 2022.
Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models. Zhiyuan Zhang, Lingjuan Lyu, Xingjun Ma, Chenguang Wang, Xu Sun. AAML. 18 Oct 2022.