ResearchTrend.AI

Asymmetry in Low-Rank Adapters of Foundation Models
arXiv:2402.16842 (v2, latest)
26 February 2024
Jiacheng Zhu
Kristjan Greenewald
Kimia Nadjahi
Haitz Sáez de Ocáriz Borde
Rickard Brüel-Gabrielsson
Leshem Choshen
Marzyeh Ghassemi
Mikhail Yurochkin
Justin Solomon

Papers citing "Asymmetry in Low-Rank Adapters of Foundation Models"

15 of 15 citing papers shown.
Noise Consistency Regularization for Improved Subject-Driven Image Synthesis
Yao Ni, Song Wen, Piotr Koniusz, A. Cherian (06 Jun 2025)

PoLAR: Polar-Decomposed Low-Rank Adapter Representation
Kai Lion, Liang Zhang, Bingcong Li, Niao He (03 Jun 2025)

CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning
Jiangpeng He, Zhihao Duan, Fengqing M Zhu (30 May 2025)

DL-QAT: Weight-Decomposed Low-Rank Quantization-Aware Training for Large Language Models
Wenjin Ke, Zhe Li, D. Li, Lu Tian, E. Barsoum (12 Apr 2025)

DeLoRA: Decoupling Angles and Strength in Low-rank Adaptation
Massimo Bini, Leander Girrbach, Zeynep Akata (23 Mar 2025)

Quantum-PEFT: Ultra parameter-efficient fine-tuning
Toshiaki Koike-Akino, F. Tonin, Yongtao Wu, Frank Zhengqing Wu, Leyla Naz Candogan, Volkan Cevher (07 Mar 2025)

Parameter Efficient Merging for Multimodal Large Language Models with Complementary Parameter Adaptation
Fanhu Zeng, Haiyang Guo, Fei Zhu, Li Shen, Hao Tang (24 Feb 2025)

Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA
Shuangyi Chen, Yuanxin Guo, Yue Ju, Harik Dalal, Ashish Khisti (03 Feb 2025)

Decentralized Low-Rank Fine-Tuning of Large Language Models
Sajjad Ghiasvand, Mahnoosh Alizadeh, Ramtin Pedarsani (26 Jan 2025)

LoRA vs Full Fine-tuning: An Illusion of Equivalence
Reece Shuttleworth, Jacob Andreas, Antonio Torralba, Pratyusha Sharma (28 Oct 2024)

Theoretical Insights into Fine-Tuning Attention Mechanism: Generalization and Optimization
Xinhao Yao, Hongjin Qian, Xiaolin Hu, Gengze Xu, Wei Liu, Jian Luan, Bin Wang, Yang Liu (03 Oct 2024)

Selective Aggregation for Low-Rank Adaptation in Federated Learning
Pengxin Guo, Shuang Zeng, Y. Wang, Huijie Fan, Feifei Wang, Liangqiong Qu (02 Oct 2024)

Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead
Rickard Brüel-Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon (17 Jun 2024)

Improving LoRA in Privacy-preserving Federated Learning
Youbang Sun, Zitao Li, Yaliang Li, Bolin Ding (18 Mar 2024)

SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules
Xiangyu Chen, Jing Liu, Ye Wang, Pu Wang, Matthew Brand, Guanghui Wang, T. Koike-Akino (18 Mar 2024)