
Assortment of Attention Heads: Accelerating Federated PEFT with Head Pruning and Strategic Client Selection

Main: 18 Pages
Appendix: 1 Page
Bibliography: 8 Pages
5 Figures
7 Tables
Abstract

Parameter Efficient Fine-Tuning (PEFT) has become the de facto approach to adapting Large Language Models (LLMs) for downstream tasks in Natural Language Processing. However, its adoption in privacy-preserving distributed learning frameworks, such as Federated Learning (FL), remains relatively limited. This is mainly due to challenges specific to FL, such as resource-constrained devices and diverse data distributions among clients. In this paper, we propose an efficient method to perform PEFT within the FL framework for Multi-Head Attention (MHA) based language models. We address these challenges through head pruning, a novel head-specific weighted aggregation mechanism, and a client selection strategy. Head pruning minimizes training complexity within the clients, guided by an importance score computed from the confidence of each attention head. Weighted aggregation of heads ensures that the global model captures crucial updates from diverse clients, complementing our client selection strategy. We show results on the MultiNLI benchmark along with the 20 Newsgroups, XL-Sum, and E2E NLG datasets. Using the MultiNLI dataset with a T5-small model and LoRA as the PEFT method, we attain sparsity levels of up to 90%, resulting in a communication advantage of up to 1.8x and a reduction in training OPs of 3.9x while keeping the accuracy drop under 2%.
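The abstract does not spell out the scoring or aggregation formulas, so the following Python sketch is only one plausible reading of the pipeline: confidence-based head importance (here approximated as the mean of the per-query maximum attention probability), top-k head pruning to the stated sparsity level, and importance-weighted server-side aggregation of per-head updates. The function names, the confidence proxy, and the update format are assumptions, not the authors' exact method.

```python
# A minimal sketch (assumptions, not the paper's exact formulation) of
# confidence-based head scoring, head pruning, and head-wise weighted
# aggregation for federated PEFT.
import numpy as np


def head_importance(attn_probs: np.ndarray) -> np.ndarray:
    """Score each head by its confidence.

    attn_probs: [num_heads, num_queries, num_keys] softmax attention maps.
    Assumed proxy: mean over queries of the max attention probability.
    """
    return attn_probs.max(axis=-1).mean(axis=-1)  # [num_heads]


def prune_heads(importance: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a boolean keep-mask retaining the top (1 - sparsity) heads."""
    num_keep = max(1, int(round(len(importance) * (1.0 - sparsity))))
    keep = np.zeros_like(importance, dtype=bool)
    keep[np.argsort(importance)[-num_keep:]] = True
    return keep


def aggregate_head_updates(client_updates, client_scores):
    """Head-specific weighted aggregation on the server.

    client_updates: list of dicts {head_id: LoRA delta (np.ndarray)},
                    containing only the heads each client kept.
    client_scores:  list of dicts {head_id: importance score}.
    Each head's global update is the importance-weighted average of the
    updates from the clients that trained it.
    """
    global_update, weight_sum = {}, {}
    for updates, scores in zip(client_updates, client_scores):
        for h, delta in updates.items():
            w = scores[h]
            global_update[h] = global_update.get(h, 0.0) + w * delta
            weight_sum[h] = weight_sum.get(h, 0.0) + w
    return {h: global_update[h] / weight_sum[h] for h in global_update}


# Toy usage: 8 heads, 90% sparsity keeps a single head on this client.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(16), size=(8, 4))  # [heads, queries, keys]
imp = head_importance(probs)
mask = prune_heads(imp, sparsity=0.9)
print("kept heads:", np.flatnonzero(mask))
```

Under this reading, clients only communicate the LoRA deltas of their retained heads, which is where the reported communication and training-OP savings would come from.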

@article{venkatesha2025_2506.00743,
  title={Assortment of Attention Heads: Accelerating Federated PEFT with Head Pruning and Strategic Client Selection},
  author={Yeshwanth Venkatesha and Souvik Kundu and Priyadarshini Panda},
  journal={arXiv preprint arXiv:2506.00743},
  year={2025}
}