Prefix-Tuning+: Modernizing Prefix-Tuning by Decoupling the Prefix from Attention

16 June 2025
Haonan Wang
Brian K Chen
Siquan Li
Xinhe Liang
Hwee Kuan Lee
Kenji Kawaguchi
Tianyang Hu
arXiv:2506.13674
Main text: 10 pages, 9 figures, 4 tables; bibliography: 3 pages; appendix: 6 pages
Abstract

Parameter-Efficient Fine-Tuning (PEFT) methods have become crucial for rapidly adapting large language models (LLMs) to downstream tasks. Prefix-Tuning, an early and effective PEFT technique, demonstrated that performance comparable to full fine-tuning can be achieved with significantly reduced computational and memory overhead. Despite this early success, however, its effectiveness on modern state-of-the-art LLMs has been very limited. In this work, we demonstrate empirically that Prefix-Tuning underperforms on LLMs because of an inherent tradeoff between input and prefix significance within the attention head. This motivates us to introduce Prefix-Tuning+, a novel architecture that generalizes the principles of Prefix-Tuning while addressing its shortcomings by shifting the prefix module out of the attention head itself. We further provide an overview of our construction process to guide future users in building their own context-based methods. Our experiments show that, across a diverse set of benchmarks, Prefix-Tuning+ consistently outperforms existing Prefix-Tuning methods. Notably, it achieves performance on par with the widely adopted LoRA method on several general benchmarks, highlighting the potential of modern extensions of Prefix-Tuning approaches. Our findings suggest that, by overcoming its inherent limitations, Prefix-Tuning can remain a competitive and relevant research direction in the landscape of parameter-efficient LLM adaptation.
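
To make the tradeoff concrete: in classic Prefix-Tuning, the learned prefix keys and values are concatenated with those of the input, so each attention head's softmax distributes a fixed amount of attention mass between prefix and input, and any weight the prefix gains is taken from the input tokens. The sketch below (PyTorch-style, not the authors' code) illustrates this and contrasts it with a hypothetical decoupled variant in which the prefix contribution is added outside the attention head; the shapes and the additive form are assumptions for illustration, and the actual Prefix-Tuning+ construction in the paper may differ.

    # Illustrative sketch only (assumed shapes; not the authors' implementation).
    import torch
    import torch.nn.functional as F

    d, n, p = 64, 8, 4                               # head dim, input length, prefix length
    q = torch.randn(n, d)                            # queries of the input tokens
    Kx, Vx = torch.randn(n, d), torch.randn(n, d)    # keys/values of the input tokens
    Kp, Vp = torch.randn(p, d), torch.randn(p, d)    # learned prefix keys/values

    # (a) Classic Prefix-Tuning: one softmax over prefix and input together,
    #     so attention mass assigned to the prefix is removed from the input.
    scores = q @ torch.cat([Kp, Kx]).T / d ** 0.5
    out_prefix_tuning = F.softmax(scores, dim=-1) @ torch.cat([Vp, Vx])

    # (b) Hypothetical decoupled variant: the input attention is left intact and
    #     a separate prefix term is added outside the head (assumed additive
    #     form; the paper's Prefix-Tuning+ module may be constructed differently).
    attn_input = F.softmax(q @ Kx.T / d ** 0.5, dim=-1) @ Vx
    prefix_term = F.softmax(q @ Kp.T / d ** 0.5, dim=-1) @ Vp
    out_decoupled = attn_input + prefix_term

In (a), each row of the softmax sums to one over prefix and input columns jointly; equivalently, it can be rewritten as a gated mixture λ(x)·Attn(q, Kp, Vp) + (1 − λ(x))·Attn(q, Kx, Vx), which makes the competition explicit. In (b), the base attention pattern over the input is unchanged and the learned prefix acts as a separate module.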

@article{wang2025_2506.13674,
  title={Prefix-Tuning+: Modernizing Prefix-Tuning by Decoupling the Prefix from Attention},
  author={Haonan Wang and Brian Chen and Siquan Li and Xinhe Liang and Hwee Kuan Lee and Kenji Kawaguchi and Tianyang Hu},
  journal={arXiv preprint arXiv:2506.13674},
  year={2025}
}