DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models
arXiv:2312.11057, 18 December 2023
Authors: Jiachen Zhou, Peizhuo Lv, Yibing Lan, Guozhu Meng, Kai Chen, Hualong Ma
Community: AAML

Papers citing "DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models" (5 papers):

- Backdoor Defense in Diffusion Models via Spatial Attention Unlearning. Abha Jha, Ashwath Vaithinathan Aravindan, Matthew Salaway, Atharva Sandeep Bhide, Duygu Nur Yaldiz (AAML). 21 Apr 2025.
- REFINE: Inversion-Free Backdoor Defense via Model Reprogramming. Yuxiao Chen, Shuo Shao, Enhao Huang, Yiming Li, Pin-Yu Chen, Zhan Qin, Kui Ren (AAML). 22 Feb 2025.
- BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning. Baoyuan Wu, Hongrui Chen, Ruotong Wang, Zihao Zhu, Shaokui Wei, Danni Yuan, Mingli Zhu, Ke Xu, Li Liu, Chaoxiao Shen (AAML, ELM). 26 Jan 2024.
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples. Wanzhu Jiang, Yunfeng Diao, He Wang, Jianxin Sun, Ming Wang, Richang Hong. 16 May 2023.
- DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection. Yan Liang, Jiayi Hua, Haoyu Wang, Chunyang Chen, Yunxin Liu (FedML, SILM). 18 Jan 2021.