Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
arXiv:2208.10224 (14 August 2022)
Tian Yu Liu, Yu Yang, Baharan Mirzasoleiman
AAML
Papers citing "Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks" (13 papers shown)

PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
AAML | 28 May 2024

PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics
Sunay Bhat, Jeffrey Q. Jiang, Omead Brandon Pooladzandi, Alexander Branch, Gregory Pottie
AAML | 28 May 2024

FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models
Yanting Wang, Wei Zou, Jinyuan Jia
12 Apr 2024

Diffusion Denoising as a Certified Defense against Clean-label Poisoning
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
DiffM | 18 Mar 2024

Comparing Spectral Bias and Robustness For Two-Layer Neural Networks: SGD vs Adaptive Random Fourier Features
Aku Kammonen, Lisi Liang, Anamika Pandey, Raúl Tempone
01 Feb 2024

A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models
Aysan Esmradi, Daniel Wankit Yip, C. Chan
AAML | 18 Dec 2023

Mendata: A Framework to Purify Manipulated Training Data
Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter
03 Dec 2023

HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks
Minh-Hao Van, Alycia N. Carey, Xintao Wu
TDI, AAML | 15 Sep 2023

FLIRT: Feedback Loop In-context Red Teaming
Ninareh Mehrabi, Palash Goyal, Christophe Dupuy, Qian Hu, Shalini Ghosh, R. Zemel, Kai-Wei Chang, Aram Galstyan, Rahul Gupta
DiffM | 08 Aug 2023

Rethinking Backdoor Attacks
Alaa Khaddaj, Guillaume Leclerc, Aleksandar Makelov, Kristian Georgiev, Hadi Salman, Andrew Ilyas, A. Madry
SILM | 19 Jul 2023

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang
AAML | 06 Mar 2023

Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines
Eugene Bagdasaryan, Vitaly Shmatikov
AAML | 09 Feb 2023

PECAN: A Deterministic Certified Defense Against Backdoor Attacks
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
AAML | 27 Jan 2023