
NeuLite: Memory-Efficient Federated Learning via Elastic Progressive Training

Abstract

Federated Learning (FL) has emerged as a new learning paradigm that enables multiple devices to collaboratively train a shared model while preserving data privacy. However, the intensive memory footprint of the training process severely bottlenecks the deployment of FL on resource-constrained devices in real-world scenarios. In this paper, we propose NeuLite, a framework that breaks the memory wall through elastic progressive training. Unlike traditional FL, which updates the full model throughout the training procedure, NeuLite divides the model into blocks and trains them progressively. Beyond the progressive training paradigm, NeuLite features two key components to guide the training process: 1) the Curriculum Mentor and 2) the Training Harmonizer. Specifically, the Curriculum Mentor devises curriculum-aware training losses for each block, assisting them in learning the expected feature representations and mitigating the loss of valuable information. Additionally, the Training Harmonizer develops a parameter co-adaptation training paradigm to break the information isolation across blocks in both forward and backward propagation. Furthermore, it constructs output modules for each block to strengthen model parameter co-adaptation. Extensive experiments evaluate the effectiveness of NeuLite on both simulation and hardware testbeds. The results demonstrate that NeuLite effectively reduces peak memory usage by up to 50.4%. It also enhances model performance by up to 84.2% and accelerates the training process by up to 1.9X.
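To make the progressive, block-wise idea concrete, the following is a minimal sketch of block-wise progressive training with a per-block auxiliary output head; it is not the authors' implementation, and the block sizes, head design, optimizer, and epoch schedule are illustrative assumptions.

import torch
import torch.nn as nn

# A model split into sequential blocks (illustrative channel sizes).
blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU()),
])
channels = [32, 64, 128]
criterion = nn.CrossEntropyLoss()

def make_head(in_channels, num_classes=10):
    # Lightweight output module attached to the block currently being trained.
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(in_channels, num_classes))

def train_progressively(loader, epochs_per_block=1):
    for k in range(len(blocks)):
        head = make_head(channels[k])
        # Only the current block and its head are updated; earlier blocks are
        # frozen, so their activations and gradients need not be stored,
        # which is what lowers peak training memory.
        params = list(blocks[k].parameters()) + list(head.parameters())
        optimizer = torch.optim.SGD(params, lr=0.01)
        for _ in range(epochs_per_block):
            for x, y in loader:
                with torch.no_grad():  # frozen prefix: forward only
                    for frozen in blocks[:k]:
                        x = frozen(x)
                optimizer.zero_grad()
                loss = criterion(head(blocks[k](x)), y)
                loss.backward()
                optimizer.step()

In a federated setting, each round would apply such block-level updates on clients and aggregate only the parameters of the active block and its head on the server; the details of NeuLite's curriculum-aware losses and co-adaptation mechanism are described in the paper itself.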
