Vertical Federated Learning (VFL) has revolutionised collaborative machine learning by enabling privacy-preserving model training across multiple parties. However, it remains vulnerable to information leakage during intermediate computation sharing. While Contrastive Federated Learning (CFL) was introduced to mitigate these privacy concerns through representation learning, it still faces challenges from gradient-based attacks. This paper presents a comprehensive experimental analysis of gradient-based attacks in CFL environments and evaluates random client selection as a defensive strategy. Through extensive experimentation, we demonstrate that random client selection is particularly effective in defending against gradient attacks in CFL networks. Our findings provide valuable insights for implementing robust security measures in contrastive federated learning systems, contributing to the development of more secure collaborative learning frameworks.
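The defensive strategy evaluated above can be illustrated with a minimal sketch of random client selection: each training round, only a random subset of parties participates, so an observer of shared intermediate values cannot reliably attribute gradients to a fixed client across rounds. The function name, `fraction` parameter, and round loop below are illustrative assumptions, not the paper's implementation.

```python
import random


def select_clients(client_ids, fraction, rng):
    """Randomly select a fraction of clients for one training round.

    Illustrative sketch (not the paper's code): sampling a different
    subset each round limits how much any single client's gradients
    are exposed to the other parties.
    """
    k = max(1, int(len(client_ids) * fraction))
    return rng.sample(client_ids, k)


# Example: 10 clients, 30% participation per round.
rng = random.Random(0)  # seeded for reproducibility
clients = list(range(10))
for round_idx in range(3):
    participants = select_clients(clients, 0.3, rng)
    # In a real CFL round, only `participants` would compute and
    # share their contrastive representations/gradients here.
```

The participation fraction trades off privacy against convergence speed: smaller subsets expose less per round but require more rounds to train.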
@article{ginanjar2025_2505.10759,
  title={Random Client Selection on Contrastive Federated Learning for Tabular Data},
  author={Achmad Ginanjar and Xue Li and Priyanka Singh and Wen Hua},
  journal={arXiv preprint arXiv:2505.10759},
  year={2025}
}