SHE-LoRA: Selective Homomorphic Encryption for Federated Tuning with Heterogeneous LoRA

27 May 2025
Jianmin Liu, Li Yan, Borui Li, Lei Yu, Chao Shen
Abstract

Federated fine-tuning of large language models (LLMs) is critical for improving their performance on domain-specific tasks. However, prior work has shown that clients' private data can be recovered via gradient inversion attacks. Existing privacy-preservation techniques against such attacks typically entail performance degradation and high costs, making them ill-suited for clients with heterogeneous data distributions and device capabilities. In this paper, we propose SHE-LoRA, which integrates selective homomorphic encryption (HE) and low-rank adaptation (LoRA) to enable efficient and privacy-preserving federated tuning of LLMs in cross-device environments. Heterogeneous clients adaptively select a subset of model parameters for homomorphic encryption based on parameter sensitivity assessment, with the encrypted subset determined via negotiation. To ensure accurate model aggregation, we design a column-aware secure aggregation method and customized reparameterization techniques to align the aggregation results with the heterogeneous device capabilities of clients. Extensive experiments demonstrate that SHE-LoRA maintains performance comparable to non-private baselines, achieves strong resistance to state-of-the-art attacks, and, compared to the baseline, reduces communication overhead by 94.901% and encryption computation overhead by 99.829%. Our code is accessible at this https URL.
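The abstract outlines the client-side idea: score LoRA parameters by sensitivity, homomorphically encrypt only the most sensitive columns, and send the rest in plaintext. The minimal Python sketch below illustrates just that selection-and-split step; the gradient-magnitude sensitivity score, the encryption ratio, and the he_encrypt placeholder are illustrative assumptions rather than the paper's exact method, which additionally negotiates the encrypted column set across clients and performs column-aware secure aggregation on the server.

    # Minimal sketch of sensitivity-based selective encryption for a LoRA update.
    # Assumptions (not from the paper): gradient-magnitude sensitivity proxy,
    # a fixed encryption ratio, and a stand-in he_encrypt function in place of a
    # real HE scheme such as CKKS.
    import numpy as np

    def select_columns_to_encrypt(lora_B: np.ndarray, grad_B: np.ndarray, ratio: float = 0.1):
        """Rank columns of the LoRA B matrix by a simple gradient-magnitude
        sensitivity proxy and return the indices of the top `ratio` fraction."""
        sensitivity = np.abs(grad_B).sum(axis=0)      # one score per column
        k = max(1, int(ratio * lora_B.shape[1]))
        return np.argsort(sensitivity)[-k:]           # most sensitive columns

    def he_encrypt(block: np.ndarray):
        """Placeholder for a homomorphic encryption call; returns the block
        unchanged here so the sketch runs without an HE library."""
        return block

    def build_update(lora_B: np.ndarray, grad_B: np.ndarray, ratio: float = 0.1):
        """Split a client's LoRA update into an encrypted part (sensitive
        columns) and a plaintext part (all remaining columns)."""
        enc_idx = select_columns_to_encrypt(lora_B, grad_B, ratio)
        plain_idx = np.setdiff1d(np.arange(lora_B.shape[1]), enc_idx)
        return {
            "encrypted_columns": he_encrypt(lora_B[:, enc_idx]),
            "encrypted_index": enc_idx,
            "plain_columns": lora_B[:, plain_idx],
            "plain_index": plain_idx,
        }

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        B = rng.normal(size=(768, 16))    # LoRA B matrix, rank 16
        g = rng.normal(size=(768, 16))    # its gradient from local fine-tuning
        update = build_update(B, g, ratio=0.25)
        print(update["encrypted_index"])  # columns that would be encrypted

In a full pipeline, the server would aggregate the encrypted and plaintext column groups separately and reparameterize the result to match each client's LoRA rank, which is what the paper's column-aware aggregation addresses.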

View on arXiv
@article{liu2025_2505.21051,
  title={SHE-LoRA: Selective Homomorphic Encryption for Federated Tuning with Heterogeneous LoRA},
  author={Jianmin Liu and Li Yan and Borui Li and Lei Yu and Chao Shen},
  journal={arXiv preprint arXiv:2505.21051},
  year={2025}
}