FedCTTA: A Collaborative Approach to Continual Test-Time Adaptation in Federated Learning

Federated Learning (FL) enables collaborative model training across distributed clients without sharing raw data, making it well suited to privacy-sensitive applications. However, FL models often suffer performance degradation due to distribution shifts between training and deployment. Test-Time Adaptation (TTA) offers a promising remedy by allowing models to adapt using only test samples, yet existing TTA methods in FL face computational overhead, privacy risks from feature sharing, and scalability limits imposed by memory constraints. To address these limitations, we propose Federated Continual Test-Time Adaptation (FedCTTA), a privacy-preserving and computationally efficient framework for federated adaptation. Unlike prior methods that rely on sharing local feature statistics, FedCTTA avoids direct feature exchange by leveraging similarity-aware aggregation based on model output distributions over randomly generated noise samples. This enables adaptive knowledge sharing while preserving data privacy. Furthermore, FedCTTA minimizes prediction entropy at each client for continual adaptation, keeping the model confident under evolving target distributions. Our method eliminates the need for server-side training during adaptation and maintains a constant memory footprint, so it remains scalable as the number of clients or training rounds grows. Extensive experiments show that FedCTTA surpasses existing methods across diverse temporal and spatial heterogeneity scenarios.
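To make the mechanism concrete, below is a minimal PyTorch-style sketch of the two ingredients the abstract describes: entropy minimization on each client's test batches, and similarity-aware aggregation computed from model outputs on a shared random-noise batch. All names (`entropy_loss`, `client_adapt_step`, `similarity_weights`, `aggregate`) are illustrative, and the choice of cosine similarity with softmax row-normalization is an assumed instantiation, not necessarily the paper's exact formulation.

```python
# Hedged sketch of FedCTTA's two mechanisms; function names and the
# cosine-similarity metric are assumptions, not the authors' code.
import copy
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    """Mean Shannon entropy of the softmax predictions (client-side TTA loss)."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    return -(probs * log_probs).sum(dim=1).mean()

def client_adapt_step(model, optimizer, test_batch):
    """One continual test-time adaptation step: minimize prediction entropy."""
    loss = entropy_loss(model(test_batch))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

@torch.no_grad()
def similarity_weights(models, noise):
    """Pairwise similarity of client output distributions on shared random noise.

    Every client evaluates its model on the same noise tensor; only these
    outputs are compared, so no raw features or test data are exchanged.
    """
    outputs = [F.softmax(m(noise), dim=1).flatten() for m in models]
    n = len(outputs)
    sim = torch.empty(n, n)
    for i in range(n):
        for j in range(n):
            sim[i, j] = F.cosine_similarity(outputs[i], outputs[j], dim=0)
    return F.softmax(sim, dim=1)  # row-normalize into aggregation weights

@torch.no_grad()
def aggregate(models, weights):
    """Give each client a personalized, similarity-weighted average of all models."""
    states = [m.state_dict() for m in models]
    new_states = []
    for i in range(len(models)):
        merged = copy.deepcopy(states[i])
        for key in merged:
            if merged[key].dtype.is_floating_point:  # skip integer buffers
                merged[key] = sum(weights[i, j] * states[j][key]
                                  for j in range(len(models)))
        new_states.append(merged)
    for model, state in zip(models, new_states):
        model.load_state_dict(state)

# Hypothetical per-round usage: clients adapt locally on their own test
# batches, then models are merged via outputs on a shared noise batch.
# noise = torch.randn(32, 3, 32, 32)  # identical tensor for every client
# for model, opt, batch in zip(models, optimizers, test_batches):
#     client_adapt_step(model, opt, batch)
# aggregate(models, similarity_weights(models, noise))
```

Because similarity is measured on synthetic noise rather than real inputs, only the functional behavior of each model is compared, which is what lets this scheme avoid the feature-sharing privacy risk mentioned above.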
@article{rajib2025_2505.13643,
  title={FedCTTA: A Collaborative Approach to Continual Test-Time Adaptation in Federated Learning},
  author={Rakibul Hasan Rajib and Md Akil Raihan Iftee and Mir Sazzat Hossain and A. K. M. Mahbubur Rahman and Sajib Mistry and M Ashraful Amin and Amin Ahsan Ali},
  journal={arXiv preprint arXiv:2505.13643},
  year={2025}
}