arXiv:2312.08667 · Cited By
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey

14 December 2023
Yichen Wan, Youyang Qu, Wei Ni, Yong Xiang, Longxiang Gao, Ekram Hossain
AAML

Papers citing "Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey"

12 papers shown

1. SMTFL: Secure Model Training to Untrusted Participants in Federated Learning
   Zhihui Zhao, Xiaorong Dong, Yimo Ren, Jianhua Wang, Dan Yu, Hongsong Zhu, Yongle Chen
   24 Feb 2025

2. Comments on "Privacy-Enhanced Federated Learning Against Poisoning Adversaries"
   T. Schneider, Ajith Suresh, Hossein Yalame
   FedML · 30 Sep 2024

3. Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions
   Thuy-Dung Nguyen, Tuan Nguyen, Phi Le Nguyen, Hieu H. Pham, Khoa D. Doan, Kok-Seng Wong
   AAML · FedML · 03 Mar 2023

4. NCL: Textual Backdoor Defense Using Noise-augmented Contrastive Learning
   Shengfang Zhai, Qingni Shen, Xiaoyi Chen, Weilong Wang, Cong Li, Yuejian Fang, Zhonghai Wu
   AAML · 03 Mar 2023

5. FL-Defender: Combating Targeted Attacks in Federated Learning
   N. Jebreel, J. Domingo-Ferrer
   AAML · FedML · 02 Jul 2022

6. Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks
   Xi Li, Zhen Xiang, David J. Miller, G. Kesidis
   AAML · 06 Dec 2021

7. Decentralized Wireless Federated Learning with Differential Privacy
   Shuzhen Chen, Dongxiao Yu, Yifei Zou, Jiguo Yu, Xiuzhen Cheng
   19 Sep 2021

8. DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
   Yuanchun Li, Jiayi Hua, Haoyu Wang, Chunyang Chen, Yunxin Liu
   FedML · SILM · 18 Jan 2021

9. Privacy and Robustness in Federated Learning: Attacks and Defenses
   Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
   FedML · 07 Dec 2020

10. Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification
    Chuanshuai Chen, Jiazhu Dai
    SILM · 11 Jul 2020

11. Clean-Label Backdoor Attacks on Video Recognition Models
    Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang
    AAML · 06 Mar 2020

12. Analyzing Federated Learning through an Adversarial Lens
    A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
    FedML · 29 Nov 2018