A Single Linear Layer Yields Task-Adapted Low-Rank Matrices

22 March 2024
Hwichan Kim
S. Sasaki
Sho Hoshino
Ukyo Honda
Abstract

Low-Rank Adaptation (LoRA) is a widely used Parameter-Efficient Fine-Tuning (PEFT) method that updates an initial weight matrix $W_0$ with a delta matrix $\Delta W$ composed of two low-rank matrices $A$ and $B$. A previous study suggested that there is a correlation between $W_0$ and $\Delta W$. In this study, we aim to delve deeper into the relationships between $W_0$ and the low-rank matrices $A$ and $B$ in order to further comprehend the behavior of LoRA. In particular, we analyze a conversion matrix that transforms $W_0$ into the low-rank matrices, which encapsulates information about these relationships. Our analysis reveals that the conversion matrices are similar across layers. Inspired by these findings, we hypothesize that a single linear layer, which takes each layer's $W_0$ as input, can yield task-adapted low-rank matrices. To confirm this hypothesis, we devise a method named Conditionally Parameterized LoRA (CondLoRA) that updates initial weight matrices with low-rank matrices derived from a single linear layer. Our empirical results show that CondLoRA maintains performance on par with LoRA despite having fewer trainable parameters. Therefore, we conclude that "a single linear layer yields task-adapted low-rank matrices."
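
The sketch below illustrates the core idea described in the abstract: a single trainable linear map, shared across layers, converts each frozen weight $W_0$ into the low-rank factors $A$ and $B$ that form the LoRA update $\Delta W = BA$. The exact parameterization (shapes of the shared conversion matrices, scaling, initialization) is an assumption for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class CondLoRAGenerator(nn.Module):
    """Illustrative sketch of the CondLoRA idea: one shared linear map
    takes a layer's frozen W_0 and produces that layer's low-rank
    factors A and B. Parameterization here is a hypothetical choice."""

    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        # Shared "conversion" parameters applied to every layer's W_0.
        self.conv_A = nn.Parameter(torch.randn(rank, d_out) * 0.01)  # A = conv_A @ W_0
        self.conv_B = nn.Parameter(torch.zeros(d_in, rank))          # B = W_0 @ conv_B

    def forward(self, W0: torch.Tensor):
        # W0: frozen pre-trained weight of shape (d_out, d_in)
        A = self.conv_A @ W0   # (rank, d_in)
        B = W0 @ self.conv_B   # (d_out, rank)
        return A, B


def condlora_linear(x: torch.Tensor, W0: torch.Tensor,
                    gen: CondLoRAGenerator, scaling: float = 1.0) -> torch.Tensor:
    """LoRA-style forward pass y = x (W_0 + scaling * B A)^T,
    with A and B generated from W_0 by the shared module."""
    A, B = gen(W0)
    delta_W = B @ A            # (d_out, d_in) low-rank update
    return x @ (W0 + scaling * delta_W).T
```

Because only the shared conversion parameters are trainable (the frozen $W_0$ of every layer is reused as input), such a scheme needs fewer trainable parameters than standard LoRA, which learns a separate $A$ and $B$ per layer.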
