WeightLoRA: Keep Only Necessary Adapters

3 June 2025
Andrey Veprikov
Vladimir Solodkin
Alexander Zyl
Andrey Savchenko
Aleksandr Beznosikov
Abstract

The widespread utilization of language models in modern applications is inconceivable without Parameter-Efficient Fine-Tuning techniques, such as low-rank adaptation (LoRA), which adds trainable adapters to selected layers. Although LoRA may obtain accurate solutions, it requires significant memory to train large models and intuition about which layers to add adapters to. In this paper, we propose a novel method, WeightLoRA, which overcomes this issue by adaptively selecting the most critical LoRA heads throughout the optimization process. As a result, we can significantly reduce the number of trainable parameters while obtaining consistent or even superior metric values. We conduct experiments on a series of competitive benchmarks with DeBERTa, BART, and Llama models, comparing our method with different adaptive approaches. The experimental results demonstrate the efficacy of WeightLoRA and the superior performance of WeightLoRA+ in almost all cases.
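To make the idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: each frozen linear layer receives a low-rank adapter scaled by a trainable scalar weight, and adapters whose weights end up small are pruned. The names (WeightedLoRALinear, head_weight, prune_adapters) and the top-K selection rule are illustrative assumptions; the actual WeightLoRA/WeightLoRA+ procedures in the paper may differ.

import torch
import torch.nn as nn

class WeightedLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank adapter scaled by a trainable weight.

    Hypothetical sketch: the scalar `head_weight` stands in for the per-head
    importance that an adaptive selection scheme could use to decide which
    adapters to keep.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.head_weight = nn.Parameter(torch.ones(1))  # trainable importance
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        lora_out = (x @ self.lora_A.T) @ self.lora_B.T
        return self.base(x) + self.head_weight * self.scaling * lora_out


def prune_adapters(modules, keep_k: int):
    """Keep only the K adapters with the largest |head_weight|; disable the rest."""
    weights = torch.tensor([m.head_weight.abs().item() for m in modules])
    keep = set(torch.topk(weights, k=keep_k).indices.tolist())
    for i, m in enumerate(modules):
        if i not in keep:
            m.head_weight.data.zero_()
            m.lora_A.requires_grad = False
            m.lora_B.requires_grad = False

In this sketch, only lora_A, lora_B, and head_weight are optimized; calling prune_adapters periodically during training keeps the K most important heads, which mirrors the paper's goal of retaining only the necessary adapters while freezing everything else.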

@article{veprikov2025_2506.02724,
  title={WeightLoRA: Keep Only Necessary Adapters},
  author={Andrey Veprikov and Vladimir Solodkin and Alexander Zyl and Andrey Savchenko and Aleksandr Beznosikov},
  journal={arXiv preprint arXiv:2506.02724},
  year={2025}
}