FedProphet: Memory-Efficient Federated Adversarial Training via Robust and Consistent Cascade Learning

Federated Adversarial Training (FAT) can supplement Federated Learning (FL) with robustness against adversarial examples, promoting a meaningful step toward trustworthy AI. However, FAT requires large models to preserve high accuracy while achieving strong robustness, incurring high memory-swapping latency when training on memory-constrained edge devices. Existing memory-efficient FL methods suffer from poor accuracy and weak robustness due to inconsistency between local and global models. In this paper, we propose FedProphet, a novel FAT framework that achieves memory efficiency, robustness, and consistency simultaneously. FedProphet reduces the memory requirement of local training while guaranteeing adversarial robustness through adversarial cascade learning with strong convexity regularization, and we show that strong robustness also implies low inconsistency in FedProphet. We further develop a training coordinator on the FL server, with Adaptive Perturbation Adjustment for balancing utility and robustness and Differentiated Module Assignment for mitigating objective inconsistency. FedProphet significantly outperforms other baselines under different experimental settings, maintaining the accuracy and robustness of end-to-end FAT with 80% memory reduction and up to 10.8x speedup in training time.
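The cascade-learning idea behind the memory savings can be illustrated with a minimal sketch: each client trains only one module of the full model at a time, generates adversarial examples at that module's input, and adds a strong-convexity (L2) regularizer to the local adversarial objective. The PGD attack, the auxiliary head, the hyperparameters (eps, alpha, mu), and the exact form of the regularizer below are illustrative assumptions rather than the paper's precise formulation.

```python
# Minimal sketch of adversarial cascade learning with a strong-convexity term.
# Assumptions: `module` is the single cascade module assigned to this client,
# `head` is an auxiliary output head; both are torch.nn.Module instances.
import torch
import torch.nn.functional as F


def pgd_attack(module, head, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Generate adversarial examples at the input of one cascade module."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(head(module(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)
    return x_adv.detach()


def local_cascade_step(module, head, optimizer, x, y, mu=0.1):
    """One local FAT step on a single module: adversarial loss plus an
    illustrative strong-convexity (L2) regularizer on the module output."""
    x_adv = pgd_attack(module, head, x, y)
    out = module(x_adv)
    reg = 0.5 * mu * out.flatten(1).pow(2).sum(dim=1).mean()
    loss = F.cross_entropy(head(out), y) + reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only one module's activations and gradients are kept in device memory at any time, this style of training avoids the memory swapping that end-to-end FAT would require on edge devices, which is the memory reduction the abstract refers to.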
@article{tang2025_2409.08372,
  title   = {FedProphet: Memory-Efficient Federated Adversarial Training via Robust and Consistent Cascade Learning},
  author  = {Minxue Tang and Yitu Wang and Jingyang Zhang and Louis DiValentin and Aolin Ding and Amin Hass and Yiran Chen and Hai "Helen" Li},
  journal = {arXiv preprint arXiv:2409.08372},
  year    = {2025}
}