Parameter-Efficient Fine-Tuning via Circular Convolution

Abstract

Low-Rank Adaptation (LoRA) has gained popularity for fine-tuning large foundation models, leveraging low-rank matrices $\mathbf{A}$ and $\mathbf{B}$ to represent weight changes (i.e., $\Delta \mathbf{W} = \mathbf{B}\mathbf{A}$). This method reduces trainable parameters and avoids the heavy memory consumption of a full delta matrix by multiplying the activation sequentially by $\mathbf{A}$ and $\mathbf{B}$. Despite its success, the intrinsic low-rank characteristic may limit its performance. Although several variants have been proposed to address this issue, they often sacrifice the computational and memory efficiency that makes LoRA attractive in the first place. In this paper, we propose Circular Convolution Adaptation (C$^3$A), which not only achieves high-rank adaptation with enhanced performance but also excels in both computational and memory efficiency. Extensive experiments demonstrate that C$^3$A consistently outperforms LoRA and its variants across various fine-tuning tasks.
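To make the mechanism concrete: a circulant delta matrix is fully determined by a single learnable kernel, and multiplying it with the activation reduces to a circular convolution, computable via FFT without ever materializing the matrix. The PyTorch sketch below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a square weight and a single circulant block (the paper may use a block structure to handle general shapes), and the class and parameter names (C3ALinear, kernel) are hypothetical.

import torch

class C3ALinear(torch.nn.Module):
    # Sketch: frozen linear layer plus a circular-convolution delta.
    # The delta weight is an implicit circulant matrix defined by one
    # learnable kernel of size d, applied in O(d log d) time via FFT.
    def __init__(self, base: torch.nn.Linear):
        super().__init__()
        # Simplifying assumption: square weight, single circulant block.
        assert base.in_features == base.out_features
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weight
        # Zero init so the adapted model starts identical to the base model.
        self.kernel = torch.nn.Parameter(torch.zeros(base.in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Circulant matrix-vector product == circular convolution:
        # pointwise multiplication in the Fourier domain, then inverse FFT.
        delta = torch.fft.irfft(
            torch.fft.rfft(x, dim=-1) * torch.fft.rfft(self.kernel),
            n=x.shape[-1], dim=-1,
        )
        return self.base(x) + delta

# Usage sketch: only the 768-dim kernel is trainable.
layer = C3ALinear(torch.nn.Linear(768, 768))
out = layer(torch.randn(4, 768))

Because the circulant matrix is never materialized, the delta can be full rank while storing only d parameters per layer, compared with r(d_in + d_out) for a rank-r LoRA update.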

@article{chen2025_2407.19342,
  title={Parameter-Efficient Fine-Tuning via Circular Convolution},
  author={Aochuan Chen and Jiashun Cheng and Zijing Liu and Ziqi Gao and Fugee Tsung and Yu Li and Jia Li},
  journal={arXiv preprint arXiv:2407.19342},
  year={2025}
}