Sparse High Rank Adapters (arXiv 2406.13175)
28 January 2025
K. Bhardwaj, N. Pandey, Sweta Priyadarshi, Viswanath Ganapathy, Rafael Esteves, Shreya Kadambi, Shubhankar Borse, P. Whatmough, Risheek Garrepalli, M. V. Baalen, Harris Teague, Markus Nagel
Tags: MQ
Papers citing "Sparse High Rank Adapters" (32 / 32 papers shown)
1. PaCA: Partial Connection Adaptation for Efficient Fine-Tuning (28 Feb 2025)
   Sunghyeon Woo, Sol Namkung, Sunwoo Lee, Inho Jeong, Beomseok Kim, Dongsuk Jeon
   95 / 0 / 0

2. SubZero: Composing Subject, Style, and Action via Zero-Shot Personalization (27 Feb 2025)
   Shubhankar Borse, K. Bhardwaj, Mohammad Reza Karimi Dastjerdi, Hyojin Park, Shreya Kadambi, ..., Prathamesh Mandke, Ankita Nayak, Harris Teague, Munawar Hayat, Fatih Porikli
   Tags: DiffM
   135 / 1 / 0

3. Is Parameter Collision Hindering Continual Learning in LLMs? (14 Oct 2024)
   Shuo Yang, Kun-Peng Ning, Yu-Yang Liu, Jia-Yu Yao, Yong-Hong Tian, Yi-Bing Song, Li Yuan
   Tags: MoMe, CLL
   89 / 4 / 0

4. Rapid Switching and Multi-Adapter Fusion via Sparse High Rank Adapters (22 Jul 2024)
   Kartikeya Bhardwaj, N. Pandey, Sweta Priyadarshi, Viswanath Ganapathy, Rafael Esteves, ..., Paul N. Whatmough, Risheek Garrepalli, M. V. Baalen, Harris Teague, Markus Nagel
   Tags: MoMe
   47 / 0 / 0

5. LoRA+: Efficient Low Rank Adaptation of Large Models (19 Feb 2024)
   Soufiane Hayou, Nikhil Ghosh, Bin Yu
   Tags: AI4CE
   88 / 173 / 0

6. DoRA: Weight-Decomposed Low-Rank Adaptation (14 Feb 2024)
   Shih-yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen
   86 / 395 / 0

7. RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation (09 Jan 2024)
   Mahdi Nikdan, Soroush Tabesh, Elvir Crnčević, Dan Alistarh
   69 / 30 / 0

8. A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA (28 Nov 2023)
   Damjan Kalajdzievski
   Tags: ALM
   63 / 96 / 0

9. ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs (22 Nov 2023)
   Viraj Shah, Nataniel Ruiz, Forrester Cole, Erika Lu, Svetlana Lazebnik, Yuanzhen Li, Varun Jampani
   Tags: DiffM
   97 / 111 / 0

10. Sparse Low-rank Adaptation of Pre-trained Language Models (20 Nov 2023)
    Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, Maosong Sun
    56 / 64 / 0

11. Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch (06 Nov 2023)
    Le Yu, Yu Bowen, Haiyang Yu, Fei Huang, Yongbin Li
    Tags: MoMe
    98 / 317 / 0

12. Orthogonal Subspace Learning for Language Model Continual Learning (22 Oct 2023)
    Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, Xuanjing Huang
    Tags: CLL
    150 / 104 / 0

13. VeRA: Vector-based Random Matrix Adaptation (17 Oct 2023)
    D. J. Kopiczko, Tijmen Blankevoort, Yuki Markus Asano
    Tags: VLM
    77 / 156 / 0

14. LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning (07 Aug 2023)
    Longteng Zhang, Lin Zhang, Shaoshuai Shi, Xiaowen Chu, Yue Liu
    Tags: AI4CE
    49 / 99 / 0

15. Llama 2: Open Foundation and Fine-Tuned Chat Models (18 Jul 2023)
    Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, ..., Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom
    Tags: AI4MH, ALM
    290 / 11,858 / 0

16. SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis (04 Jul 2023)
    Dustin Podell, Zion English, Kyle Lacey, A. Blattmann, Tim Dockhorn, Jonas Muller, Joe Penna, Robin Rombach
    212 / 2,356 / 0

17. A Simple and Effective Pruning Approach for Large Language Models (20 Jun 2023)
    Mingjie Sun, Zhuang Liu, Anna Bair, J. Zico Kolter
    128 / 416 / 0

18. Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis (15 Jun 2023)
    Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, Hongsheng Li
    80 / 289 / 0

19. Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models (29 May 2023)
    Yuchao Gu, Xintao Wang, Jay Zhangjie Wu, Yujun Shi, Yunpeng Chen, ..., Shuning Chang, Wei Wu, Yixiao Ge, Ying Shan, Mike Zheng Shou
    Tags: DiffM
    99 / 174 / 0

20. LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models (04 Apr 2023)
    Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-wei Lee
    Tags: ALM
    96 / 261 / 0

21. LLaMA: Open and Efficient Foundation Language Models (27 Feb 2023)
    Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, ..., Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample
    Tags: ALM, PILM
    1.5K / 13,182 / 0

22. SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters (09 Oct 2022)
    Shwai He, Liang Ding, Daize Dong, Miao Zhang, Dacheng Tao
    Tags: MoE
    68 / 88 / 0

23. DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation (25 Aug 2022)
    Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, Kfir Aberman
    271 / 2,854 / 0

24. High-Resolution Image Synthesis with Latent Diffusion Models (20 Dec 2021)
    Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, Bjorn Ommer
    Tags: 3DV
    410 / 15,486 / 0

25. Training Neural Networks with Fixed Sparse Masks (18 Nov 2021)
    Yi-Lin Sung, Varun Nair, Colin Raffel
    Tags: FedML
    80 / 204 / 0

26. Composable Sparse Fine-Tuning for Cross-Lingual Transfer (14 Oct 2021)
    Alan Ansell, Edoardo Ponti, Anna Korhonen, Ivan Vulić
    Tags: CLL, MoE
    121 / 141 / 0

27. Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning (13 Sep 2021)
    Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang
    Tags: LRM
    176 / 187 / 0

28. LoRA: Low-Rank Adaptation of Large Language Models (17 Jun 2021)
    J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
    Tags: OffRL, AI4TS, AI4CE, ALM, AIMat
    413 / 10,328 / 0

29. Parameter-Efficient Transfer Learning with Diff Pruning (14 Dec 2020)
    Demi Guo, Alexander M. Rush, Yoon Kim
    74 / 400 / 0

30. Layer-adaptive sparsity for the Magnitude-based Pruning (15 Oct 2020)
    Jaeho Lee, Sejun Park, Sangwoo Mo, SungSoo Ahn, Jinwoo Shin
    84 / 203 / 0

31. Masking as an Efficient Alternative to Finetuning for Pretrained Language Models (26 Apr 2020)
    Mengjie Zhao, Tao R. Lin, Fei Mi, Martin Jaggi, Hinrich Schütze
    50 / 120 / 0

32. SNIP: Single-shot Network Pruning based on Connection Sensitivity (04 Oct 2018)
    Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr
    Tags: VLM
    252 / 1,199 / 0