Serial Low-rank Adaptation of Vision Transformer

22 March 2025
Houqiang Zhong
Shaocheng Shen
Ke Cai
Zhenglong Wu
Jiangchao Yao
Yuan Cheng
Xuefei Li
Xiaoyun Zhang
Li Song
Qiang Hu
Abstract

Fine-tuning large pre-trained vision foundation models in a parameter-efficient manner is critical for downstream vision tasks, given the practical constraints on computational and storage costs. Low-rank adaptation (LoRA) is a well-established technique in this domain, achieving impressive efficiency by restricting the parameter space to a low-rank form. However, developing more advanced low-rank adaptation methods that further reduce parameter and memory requirements remains a significant challenge in resource-constrained application scenarios. In this study, building on the commonly used vision transformer, we propose Serial LoRA, a novel LoRA variant that introduces a shared low-rank matrix composed serially with the attention mechanism. This design captures the underlying commonality of parameters during adaptation, significantly reducing redundancy. Notably, Serial LoRA uses only 1/4 of the parameters of LoRA yet achieves comparable performance in most cases. We conduct extensive experiments on a range of vision foundation models with transformer structures, and the results confirm the consistent superiority of our method.
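
The abstract's core idea, a single shared low-rank matrix composed serially with the attention mechanism rather than separate parallel adapters per projection, can be illustrated with a minimal PyTorch sketch. The sketch below is an assumption-laden illustration, not the authors' implementation: the class names, the placement of the shared adapter in front of the frozen attention projections, and the rank value are all hypothetical.

import torch
import torch.nn as nn

class SerialLowRankAdapter(nn.Module):
    # Shared low-rank transform: x -> x + (x @ A) @ B.
    # Zero-initializing B makes the adapter an identity at the start of
    # fine-tuning, so the frozen pre-trained behavior is preserved.
    def __init__(self, dim: int, rank: int = 4):
        super().__init__()
        self.A = nn.Parameter(torch.randn(dim, rank) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(rank, dim))          # up-projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + (x @ self.A) @ self.B

class SerialLoRAAttention(nn.Module):
    # Wraps a frozen multi-head attention block with ONE shared adapter
    # applied serially before the q/k/v projections (hypothetical placement).
    def __init__(self, attn: nn.MultiheadAttention, rank: int = 4):
        super().__init__()
        self.attn = attn
        for p in self.attn.parameters():
            p.requires_grad = False          # freeze pre-trained attention weights
        self.adapter = SerialLowRankAdapter(attn.embed_dim, rank)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.adapter(x)                  # shared serial adaptation
        out, _ = self.attn(h, h, h, need_weights=False)
        return out

if __name__ == "__main__":
    attn = nn.MultiheadAttention(embed_dim=768, num_heads=12, batch_first=True)
    block = SerialLoRAAttention(attn, rank=4)
    x = torch.randn(2, 197, 768)             # ViT-style token sequence (batch, tokens, dim)
    print(block(x).shape)                     # torch.Size([2, 197, 768])

Under this reading, standard LoRA attaches a separate pair of low-rank matrices to each of the query, key, value, and output projections, whereas a single shared serial adapter per attention block uses roughly one matrix pair instead of four, which is consistent with the abstract's claim of using about 1/4 of LoRA's parameters.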

@article{zhong2025_2503.17750,
  title={Serial Low-rank Adaptation of Vision Transformer},
  author={Houqiang Zhong and Shaocheng Shen and Ke Cai and Zhenglong Wu and Jiangchao Yao and Yuan Cheng and Xuefei Li and Xiaoyun Zhang and Li Song and Qiang Hu},
  journal={arXiv preprint arXiv:2503.17750},
  year={2025}
}