ALoRA: Allocating Low-Rank Adaptation for Fine-tuning Large Language Models
arXiv:2403.16187, 24 March 2024
Zequan Liu, Jiawen Lyn, Wei-wei Zhu, Xing Tian, Yvette Graham

Papers citing "ALoRA: Allocating Low-Rank Adaptation for Fine-tuning Large Language Models" (9 of 9 papers shown)

DeLoRA: Decoupling Angles and Strength in Low-rank Adaptation
Massimo Bini, Leander Girrbach, Zeynep Akata. 23 Mar 2025.

Dynamic Low-Rank Sparse Adaptation for Large Language Models
Weizhong Huang, Yuxin Zhang, Xiawu Zheng, Yong-Jin Liu, Jing Lin, Yiwu Yao, Rongrong Ji. 21 Feb 2025.

Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-Selective LoRA
Haodong Lu, Chongyang Zhao, Jason Xue, Lina Yao, Kristen Moore, Dong Gong. 01 Dec 2024.

Sparse Matrix in Large Language Model Fine-tuning
Haoze He, Juncheng Billy Li, Xuan Jiang, Heather Miller. 24 May 2024.

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe. 04 Mar 2022.

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush. 15 Oct 2021.

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang. 14 Oct 2021.

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant. 18 Apr 2021.

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018.