Quantum-PEFT: Ultra parameter-efficient fine-tuning

7 March 2025
Toshiaki Koike-Akino, Francesco Tonin, Yongtao Wu, Frank Zhengqing Wu, Leyla Naz Candogan, Volkan Cevher
Abstract

This paper introduces Quantum-PEFT, which leverages quantum computations for parameter-efficient fine-tuning (PEFT). Unlike other additive PEFT methods, such as low-rank adaptation (LoRA), Quantum-PEFT exploits an underlying full-rank yet surprisingly parameter-efficient quantum unitary parameterization. With the Pauli parameterization, the number of trainable parameters grows only logarithmically with the ambient dimension, as opposed to linearly as in LoRA-based PEFT methods. As dimensions grow, Quantum-PEFT achieves a vanishingly smaller number of trainable parameters than the lowest-rank LoRA, enhancing parameter efficiency while maintaining competitive performance. We apply Quantum-PEFT to several transfer learning benchmarks in language and vision, demonstrating significant advantages in parameter efficiency.
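To make the scaling claim concrete, the following is a minimal sketch (not the authors' implementation) comparing trainable-parameter counts: rank-1 LoRA on a d x d weight needs 2*d parameters, while a unitary built from single-qubit Pauli rotations over ceil(log2 d) qubits with a fixed number of layers carries only O(log d) angles. The layer count and the exact parameterization here are illustrative assumptions.

import math

def lora_params(d: int, r: int = 1) -> int:
    # Rank-r LoRA on a d x d weight adds A (d x r) and B (r x d): 2*d*r parameters.
    return 2 * d * r

def pauli_param_count(d: int, layers: int = 4) -> int:
    # Hypothetical illustration: single-qubit Pauli rotations on n = ceil(log2(d))
    # qubits, repeated for a fixed number of layers, give O(layers * n) = O(log d)
    # trainable angles. The layer count of 4 is an assumption for illustration only.
    n_qubits = math.ceil(math.log2(d))
    return layers * n_qubits

if __name__ == "__main__":
    for d in (256, 1024, 4096, 16384):
        print(f"d={d:6d}  LoRA(r=1): {lora_params(d):7d}  "
              f"Pauli-parameterized: {pauli_param_count(d):4d}")

Running the sketch shows the gap widening with dimension (e.g. at d = 16384, rank-1 LoRA uses 32768 parameters while the log-scaling parameterization uses a few dozen), which is the qualitative behavior the abstract describes.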

View on arXiv: https://arxiv.org/abs/2503.05431
@article{koike-akino2025_2503.05431,
  title={Quantum-PEFT: Ultra parameter-efficient fine-tuning},
  author={Toshiaki Koike-Akino and Francesco Tonin and Yongtao Wu and Frank Zhengqing Wu and Leyla Naz Candogan and Volkan Cevher},
  journal={arXiv preprint arXiv:2503.05431},
  year={2025}
}