Towards Symmetric Low-Rank Adapters

29 March 2025
Tales Panoutsos
Rodrygo L. T. Santos
Flavio Figueiredo
Abstract

In this paper, we introduce Symmetric Low-Rank Adapters, an optimized variant of LoRA with even fewer weights. This method utilizes Low-Rank Symmetric Weight Matrices to learn downstream tasks more efficiently. Traditional LoRA accumulates fine-tuning weights with the original pre-trained weights via a Singular Value Decomposition (SVD)-like approach, i.e., model weights are fine-tuned via updates of the form $BA$ (where $B \in \mathbb{R}^{n\times r}$, $A \in \mathbb{R}^{r\times n}$, and $r$ is the rank of the merged weight matrix). In contrast, our approach, named SymLoRA, represents fine-tuning weights as a Spectral Decomposition, i.e., $Q \, \mathrm{diag}(\Lambda) \, Q^T$, where $Q \in \mathbb{R}^{n\times r}$ and $\Lambda \in \mathbb{R}^r$. SymLoRA requires approximately half of the fine-tuning weights. Here, we show that this approach has negligible losses in downstream efficacy.
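The following is a minimal PyTorch sketch (not the authors' code) contrasting the two parameterizations described in the abstract: the standard LoRA update $BA$ versus the SymLoRA-style symmetric update $Q\,\mathrm{diag}(\Lambda)\,Q^T$. Module names, initializations, and the example dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LoRAUpdate(nn.Module):
    """Standard LoRA update: Delta W = B @ A, with B in R^{n x r}, A in R^{r x n}."""
    def __init__(self, n: int, r: int):
        super().__init__()
        self.B = nn.Parameter(torch.zeros(n, r))          # zero init so Delta W starts at 0
        self.A = nn.Parameter(torch.randn(r, n) * 0.01)

    def delta(self) -> torch.Tensor:
        return self.B @ self.A                            # n x n update, 2*n*r trainable weights


class SymLoRAUpdate(nn.Module):
    """SymLoRA-style update: Delta W = Q diag(Lambda) Q^T, with Q in R^{n x r}, Lambda in R^r."""
    def __init__(self, n: int, r: int):
        super().__init__()
        self.Q = nn.Parameter(torch.randn(n, r) * 0.01)
        self.lam = nn.Parameter(torch.zeros(r))           # zero eigenvalues so Delta W starts at 0

    def delta(self) -> torch.Tensor:
        # Symmetric by construction: Q diag(lam) Q^T, n*r + r trainable weights
        return self.Q @ torch.diag(self.lam) @ self.Q.T


if __name__ == "__main__":
    n, r = 768, 8
    lora, symlora = LoRAUpdate(n, r), SymLoRAUpdate(n, r)
    count = lambda m: sum(p.numel() for p in m.parameters())
    print("LoRA params:   ", count(lora))     # 2 * 768 * 8     = 12288
    print("SymLoRA params:", count(symlora))  # 768 * 8 + 8     = 6152 (roughly half)
```

The parameter counts printed at the end illustrate the abstract's claim: for the same rank $r$, the symmetric parameterization needs $nr + r$ weights instead of $2nr$, i.e., roughly half.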

@article{panoutsos2025_2504.03719,
  title={Towards Symmetric Low-Rank Adapters},
  author={Tales Panoutsos and Rodrygo L. T. Santos and Flavio Figueiredo},
  journal={arXiv preprint arXiv:2504.03719},
  year={2025}
}