arXiv: 2410.18035
MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning
23 October 2024
Jingfan Zhang, Yi Zhao, Dan Chen, Xing Tian, Huanran Zheng, Wei Zhu
Tags: MoE
Papers citing "MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning" (3 of 3 papers shown)
Compositional Subspace Representation Fine-tuning for Adaptive Large Language Models
Andy Zhou
Tags: MoMe
13 Mar 2025

Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-Selective LoRA
Haodong Lu, Chongyang Zhao, Jason Xue, Lina Yao, Kristen Moore, Dong Gong
Tags: VLM, KELM, CLL
01 Dec 2024

LoRTA: Low Rank Tensor Adaptation of Large Language Models
Ignacio Hounie, Charilaos I. Kanatsoulis, Arnuv Tandon, Alejandro Ribeiro
05 Oct 2024