LLM-QFL: Distilling Large Language Model for Quantum Federated Learning

24 May 2025
Dev Gurung
Shiva Raj Pokhrel
    FedML
Abstract

Inspired by the power of large language models (LLMs), our research adapts them to quantum federated learning (QFL) to boost efficiency and performance. We propose a federated fine-tuning method that distills an LLM within QFL, allowing each client to locally adapt the model to its own data while preserving privacy and reducing unnecessary global updates. The fine-tuned LLM also acts as a reinforcement agent, optimizing QFL by adjusting optimizer steps, cutting down communication rounds, and intelligently selecting clients. Experiments show significant efficiency gains. We pioneer a synergy between LLMs and QFL, offering: i) practical efficiency: reduced communication costs and faster convergence; ii) theoretical rigor: provable guarantees for adaptive federated optimization; iii) scalability: PEFT methods (LoRA, QLoRA) enable deployment on resource-constrained quantum devices. The code implementation is available online.
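
To illustrate the control loop the abstract describes, below is a minimal Python sketch (not the authors' implementation): each client adapts a shared model to its own data, and a placeholder policy, standing in for the fine-tuned LLM acting as a reinforcement agent, adjusts the number of local optimizer steps and selects clients for the next round before FedAvg-style aggregation. The quantum model is replaced by a classical quadratic loss, and all names here (Client, llm_policy, etc.) are hypothetical.

# Minimal sketch of an LLM-guided federated round, under the assumptions above.
import numpy as np

rng = np.random.default_rng(0)

class Client:
    def __init__(self, target):
        self.target = target  # client-local optimum, standing in for local data

    def loss(self, w):
        return float(np.sum((w - self.target) ** 2))

    def local_update(self, w, steps, lr=0.1):
        # plain gradient descent on the local quadratic loss
        for _ in range(steps):
            w = w - lr * 2.0 * (w - self.target)
        return w

def llm_policy(prev_losses):
    """Placeholder for the fine-tuned LLM reinforcement agent: give
    slow-improving clients more local steps, and select for the next round
    only clients whose loss is still far from the current best."""
    best = min(prev_losses.values())
    steps = {cid: (5 if l > 2.0 * best else 2) for cid, l in prev_losses.items()}
    selected = [cid for cid, l in prev_losses.items() if l > 0.5 * best] or list(prev_losses)
    return steps, selected

clients = {i: Client(rng.normal(size=4)) for i in range(5)}
w_global = np.zeros(4)
losses = {cid: c.loss(w_global) for cid, c in clients.items()}

for rnd in range(10):
    steps, selected = llm_policy(losses)
    updates = [clients[cid].local_update(w_global, steps[cid]) for cid in selected]
    w_global = np.mean(updates, axis=0)  # FedAvg-style aggregation of selected clients
    losses = {cid: c.loss(w_global) for cid, c in clients.items()}
    print(f"round {rnd}: mean loss {np.mean(list(losses.values())):.4f}")

In the paper's setting, the local objective would be a variational quantum model and the policy a fine-tuned LLM (with PEFT adapters such as LoRA/QLoRA for resource-constrained devices); both are simplified here for brevity.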

@article{gurung2025_2505.18656,
  title   = {LLM-QFL: Distilling Large Language Model for Quantum Federated Learning},
  author  = {Dev Gurung and Shiva Raj Pokhrel},
  journal = {arXiv preprint arXiv:2505.18656},
  year    = {2025}
}