FedCC: Robust Federated Learning against Model Poisoning Attacks

Federated learning is a distributed training framework designed to address privacy concerns. However, it introduces new attack surfaces, which are especially vulnerable when data is non-Independently and Identically Distributed (non-IID). Existing approaches fail to effectively mitigate malicious influence in this setting because they typically tackle non-IID data and poisoning attacks separately. To address both challenges simultaneously, we present FedCC, a simple yet effective defense algorithm against model poisoning attacks. It leverages the Centered Kernel Alignment (CKA) similarity of penultimate layer representations for clustering, allowing malicious clients to be identified and filtered out even in non-IID settings. The penultimate layer representations are meaningful because the later layers are more sensitive to local data distributions, which enables better detection of malicious clients. The careful layer-wise use of CKA similarity mitigates attacks while preserving the useful knowledge contributed by benign clients. Our extensive experiments demonstrate the effectiveness of FedCC in mitigating both untargeted model poisoning and targeted backdoor attacks. Compared to existing outlier-detection-based and first-order-statistics-based methods, FedCC consistently reduces attack confidence to zero and reduces the average degradation of global performance by 65.5\%. We believe this new perspective on aggregation makes FedCC a valuable contribution to FL model security and privacy. The code will be made available upon acceptance.
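To illustrate the core mechanism, the following is a minimal sketch of CKA-based client filtering, assuming the server can obtain each client's penultimate-layer activations on a small shared probe batch. The `linear_cka` helper, the `filter_clients` function, and the two-cluster "keep the higher-similarity group" rule are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices X (n x d1) and Y (n x d2), whose rows correspond to the
    same n probe inputs."""
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA = ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    x_norm = np.linalg.norm(X.T @ X, ord="fro")
    y_norm = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (x_norm * y_norm)

def filter_clients(global_acts, client_acts):
    """Score each client by the CKA similarity of its penultimate-layer
    activations to the global model's, split the scores into two
    clusters, and keep the higher-similarity cluster (assumed benign)."""
    scores = np.array([linear_cka(global_acts, a) for a in client_acts])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(scores.reshape(-1, 1))
    benign = max((0, 1), key=lambda c: scores[labels == c].mean())
    return [i for i, lab in enumerate(labels) if lab == benign]
```

The design intuition matches the abstract: because penultimate-layer representations are sensitive to local data distributions, a poisoned update's representations diverge from the global model's in a way that CKA captures even under non-IID client data.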
@article{jeong2025_2212.01976,
  title   = {FedCC: Robust Federated Learning against Model Poisoning Attacks},
  author  = {Hyejun Jeong and Hamin Son and Seohu Lee and Jayun Hyun and Tai-Myoung Chung},
  journal = {arXiv preprint arXiv:2212.01976},
  year    = {2025}
}