The Expressive Power of Low-Rank Adaptation

26 October 2023
Yuchen Zeng
Kangwook Lee
Abstract

Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method that leverages low-rank adaptation of weight matrices, has emerged as a prevalent technique for fine-tuning pre-trained models such as large language models and diffusion models. Despite its huge success in practice, the theoretical underpinnings of LoRA have largely remained unexplored. This paper takes the first step to bridge this gap by theoretically analyzing the expressive power of LoRA. We prove that, for fully connected neural networks, LoRA can adapt any model $f$ to accurately represent any smaller target model $\overline{f}$ if LoRA-rank $\geq (\text{width of } f) \times \frac{\text{depth of } \overline{f}}{\text{depth of } f}$. We also quantify the approximation error when the LoRA-rank is lower than this threshold. For Transformer networks, we show any model can be adapted to a target model of the same size with rank-$\left(\frac{\text{embedding size}}{2}\right)$ LoRA adapters.
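To make the rank condition concrete, here is a minimal NumPy sketch, not the authors' construction: it applies a rank-limited additive update $W + BA$ to a single frozen weight matrix and evaluates the sufficient rank threshold from the fully connected result for hypothetical sizes (frozen model of width 64 and depth 8, target model of depth 2); all dimensions are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's construction).
import numpy as np

def lora_update(W, r, rng):
    """Return W + B @ A, an additive adaptation of W with rank at most r."""
    d_out, d_in = W.shape
    B = rng.standard_normal((d_out, r)) * 0.01  # low-rank factor B (d_out x r)
    A = rng.standard_normal((r, d_in)) * 0.01   # low-rank factor A (r x d_in)
    return W + B @ A

def rank_threshold(width_f, depth_f, depth_fbar):
    """Sufficient LoRA-rank from the paper's fully connected result:
    rank >= width(f) * depth(fbar) / depth(f)."""
    return int(np.ceil(width_f * depth_fbar / depth_f))

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))           # hypothetical frozen weight matrix
W_adapted = lora_update(W, r=8, rng=rng)    # rank-8 adapter applied to W

# Hypothetical sizes: width(f) = 64, depth(f) = 8, depth(fbar) = 2
# -> threshold = 64 * 2 / 8 = 16.
print(rank_threshold(width_f=64, depth_f=8, depth_fbar=2))  # 16
```

With these illustrative numbers, a LoRA-rank of 16 per layer would meet the sufficient condition, while the rank-8 adapter above would fall into the regime where the paper quantifies the approximation error instead.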
