Federated learning is vulnerable to poisoning attacks by malicious adversaries, and existing attack methods often incur high costs to be effective. To address this challenge, we propose a sybil-based virtual data poisoning attack, in which a malicious client generates sybil nodes to amplify the impact of the poisoned model. To reduce the computational complexity of neural network training, we develop a virtual data generation method based on gradient matching. We also design three schemes for target model acquisition, applicable to online local, online global, and offline scenarios. In simulations, our method outperforms other attack algorithms because it can obtain a global target model even under non-independent and identically distributed (non-IID) data.
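The gradient-matching idea can be pictured as an optimization over synthetic inputs whose training gradient steers the local model toward a target (poisoned) model. The following is a minimal PyTorch sketch of that idea only; it is not the authors' implementation, and the function name, the cosine-similarity matching loss, and the hyperparameters are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def generate_virtual_data(model, target_model, num_samples, data_shape,
                              num_classes, steps=200, lr=0.1):
        """Sketch: optimize virtual inputs whose training gradient pushes
        `model` toward `target_model` (hypothetical helper, not from the paper)."""
        # Desired parameter update direction: from the current model toward the target.
        target_dir = [t.detach() - p.detach()
                      for p, t in zip(model.parameters(), target_model.parameters())]

        x_virtual = torch.randn(num_samples, *data_shape, requires_grad=True)
        y_virtual = torch.randint(0, num_classes, (num_samples,))
        opt = torch.optim.Adam([x_virtual], lr=lr)

        for _ in range(steps):
            opt.zero_grad()
            loss = F.cross_entropy(model(x_virtual), y_virtual)
            grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
            # Encourage the negative training gradient (i.e., the SGD step) on the
            # virtual data to align with the target direction, layer by layer.
            match_loss = sum(1.0 - F.cosine_similarity(-g.flatten(), d.flatten(), dim=0)
                             for g, d in zip(grads, target_dir))
            match_loss.backward()
            opt.step()

        return x_virtual.detach(), y_virtual

In such a scheme, each sybil node could train on the generated virtual data and submit the resulting updates, amplifying the poisoned model's influence on aggregation without requiring real poisoned samples.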
@article{zhu2025_2505.09983,
  title   = {Sybil-based Virtual Data Poisoning Attacks in Federated Learning},
  author  = {Changxun Zhu and Qilong Wu and Lingjuan Lyu and Shibei Xue},
  journal = {arXiv preprint arXiv:2505.09983},
  year    = {2025}
}