Anti-Backdoor Learning: Training Clean Models on Poisoned Data
arXiv: 2110.11571
22 October 2021
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Bo-wen Li, Xingjun Ma
Papers citing "Anti-Backdoor Learning: Training Clean Models on Poisoned Data" (6 of 206 papers shown)
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
16 Jun 2021
What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning
Jonas Geiping, Liam H. Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein
26 Feb 2021
Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
09 Feb 2021
Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
07 Dec 2020
Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
17 Jul 2020
Clean-Label Backdoor Attacks on Video Recognition Models
Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang
06 Mar 2020