DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations

2 March 2021 · arXiv:2103.02079
Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam H. Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein
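
The title refers to combining mixup-style data augmentation with additive random noise so that training gains a differential-privacy-flavored guarantee against poisoning and backdoor triggers. Below is a minimal, illustrative Python sketch of that kind of mechanism: k-way mixup of training examples followed by additive Laplacian noise. The function name, the hyperparameters k and noise_scale, the Laplace distribution, and the one-hot label format are assumptions made for illustration, not the paper's exact construction or settings.

import numpy as np

def dp_mixup_batch(images, labels, k=4, noise_scale=0.1, rng=None):
    """Illustrative mixup-plus-noise augmentation: average k randomly
    chosen training examples with random convex weights, then add
    Laplacian noise to the mixed pixels."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(0, len(images), size=k)      # pick k source examples
    w = rng.dirichlet(np.ones(k))                   # random convex weights summing to 1
    mixed_x = np.tensordot(w, images[idx], axes=1)  # weighted pixel average
    mixed_y = np.tensordot(w, labels[idx], axes=1)  # matching soft label
    mixed_x = mixed_x + rng.laplace(scale=noise_scale, size=mixed_x.shape)
    return np.clip(mixed_x, 0.0, 1.0), mixed_y

# Example usage: images scaled to [0, 1], one-hot labels (both assumptions).
images = np.random.rand(100, 32, 32, 3)
labels = np.eye(10)[np.random.randint(0, 10, size=100)]
x_aug, y_aug = dp_mixup_batch(images, labels)

Intuitively, averaging k examples dilutes any single poisoned image's contribution, and the added noise masks what remains of its perturbation.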

Papers citing "DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations" (11 of 11 papers shown)
  1. Balancing Label Imbalance in Federated Environments Using Only Mixup and Artificially-Labeled Noise
     Kyle Rui Sang, Tahseen Rabbani, Furong Huang · FedML · 20 Sep 2024
  2. Does Federated Learning Really Need Backpropagation?
     H. Feng, Tianyu Pang, Chao Du, Wei Chen, Shuicheng Yan, Min-Bin Lin · FedML · 28 Jan 2023
  3. Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
     Tianwei Liu, Yu Yang, Baharan Mirzasoleiman · AAML · 14 Aug 2022
  4. Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation
     Xiaoguang Li, Ninghui Li, Wenhai Sun, Neil Zhenqiang Gong, Hui Li · AAML · 24 May 2022
  5. Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
     Hao He, Kaiwen Zha, Dina Katabi · AAML · 22 Feb 2022
  6. On the Effectiveness of Adversarial Training against Backdoor Attacks
     Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shutao Xia, Gang Niu, Masashi Sugiyama · AAML · 22 Feb 2022
  7. Adversarial Examples Make Strong Poisons
     Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein · SILM · 21 Jun 2021
  8. Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
     Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein · SILM · 16 Jun 2021
  9. Survey: Image Mixing and Deleting for Data Augmentation
     Humza Naveed, Saeed Anwar, Munawar Hayat, Kashif Javed, Ajmal Mian · 13 Jun 2021
  10. AirMixML: Over-the-Air Data Mixup for Inherently Privacy-Preserving Edge Machine Learning
     Yusuke Koda, Jihong Park, M. Bennis, Praneeth Vepakomma, Ramesh Raskar · 02 May 2021
  11. Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
     Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein · SILM · 18 Dec 2020