AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning

18 March 2023
Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, Tuo Zhao
arXiv: 2303.10512
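
AdaLoRA extends LoRA by parameterizing each weight update as an SVD-like factorization and pruning singular values during training, so the rank budget is shifted toward the weight matrices that matter most. As a hedged illustration (not part of this page), the sketch below applies AdaLoRA through the HuggingFace PEFT library; the backbone, target modules, and hyperparameter values are assumptions for the example, and the `AdaLoraConfig` argument names follow recent PEFT releases.

```python
# Hedged sketch: AdaLoRA fine-tuning via HuggingFace PEFT (assumed installed).
# Argument names (init_r, target_r, tinit, tfinal, deltaT, total_step) follow
# recent PEFT releases and may differ in older versions.
from transformers import AutoModelForSequenceClassification
from peft import AdaLoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base",  # illustrative backbone choice
    num_labels=2,
)

config = AdaLoraConfig(
    init_r=12,        # starting rank for every adapted matrix
    target_r=4,       # average rank after the budget has been pruned
    tinit=200,        # warm-up steps before any singular values are pruned
    tfinal=1000,      # steps over which the budget is annealed to target_r
    deltaT=10,        # interval (in steps) between budget reallocations
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_proj", "key_proj", "value_proj"],  # DeBERTa-v3 attention projections
    total_step=3000,  # planned number of optimizer steps
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank factors remain trainable
```

In a custom training loop, the PEFT implementation also expects `model.base_model.update_and_allocate(global_step)` to be called after each optimizer step, so that importance scores are refreshed and ranks are actually reallocated.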

Papers citing "AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning"

Showing 33 of 83 citing papers.

Fine-Tuning LLMs for Reliable Medical Question-Answering Services
Ali Anaissi, Ali Braytee, Junaid Akram · LM&MA, AI4MH · 21 Oct 2024

LDAdam: Adaptive Optimization from Low-Dimensional Gradient Statistics
Thomas Robert, M. Safaryan, Ionut-Vlad Modoranu, Dan Alistarh · ODL · 21 Oct 2024

Mitigating Forgetting in LLM Supervised Fine-Tuning and Preference Learning
H. Fernando, Han Shen, Parikshit Ram, Yi Zhou, Horst Samulowitz, Nathalie Baracaldo, Tianyi Chen · CLL · 20 Oct 2024

BrainECHO: Semantic Brain Signal Decoding through Vector-Quantized Spectrogram Reconstruction for Whisper-Enhanced Text Generation
Juntao Li, Zhenxi Song, Jiaqi Wang, Min Zhang, Honghai Liu, Min Zhang, Zhiguo Zhang · 19 Oct 2024

LoLDU: Low-Rank Adaptation via Lower-Diag-Upper Decomposition for Parameter-Efficient Fine-Tuning
Yiming Shi, Jiwei Wei, Yujia Wu, Ran Ran, Chengwei Sun, Shiyuan He, Yang Yang · ALM · 17 Oct 2024

LoKO: Low-Rank Kalman Optimizer for Online Fine-Tuning of Large Models
Hossein Abdi, Mingfei Sun, Andi Zhang, Samuel Kaski, Wei Pan · 15 Oct 2024

Parenting: Optimizing Knowledge Selection of Retrieval-Augmented Language Models with Parameter Decoupling and Tailored Tuning
Yongxin Xu, Ruizhe Zhang, Xinke Jiang, Yujie Feng, Yuzhen Xiao, Xinyu Ma, Runchuan Zhu, Xu Chu, Junfeng Zhao, Yasha Wang · KELM · 14 Oct 2024

RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates
Md. Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi · 14 Oct 2024

Towards Efficient Visual-Language Alignment of the Q-Former for Visual Reasoning Tasks
Sungkyung Kim, Adam Lee, Junyoung Park, Andrew Chung, Jusang Oh, Jay-Yoon Lee · 12 Oct 2024

MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning
Yaming Yang, Dilxat Muhtar, Yelong Shen, Yuefeng Zhan, Jianfeng Liu, ..., Denvy Deng, Feng Sun, Qi Zhang, Weizhu Chen, Yunhai Tong · MoE, MoMe · 12 Oct 2024

Packing Analysis: Packing Is More Appropriate for Large Models or Datasets in Supervised Fine-tuning
Shuhe Wang, Guoyin Wang, Yucheng Wang, Jiwei Li, Eduard H. Hovy, Chen Guo · 10 Oct 2024

Full-Rank No More: Low-Rank Weight Training for Modern Speech Recognition Models
Adriana Fernandez-Lopez, Shiwei Liu, L. Yin, Stavros Petridis, Maja Pantic · 10 Oct 2024

Fine-tuning can Help Detect Pretraining Data from Large Language Models
Han Zhang, Songxin Zhang, Bingyi Jing, Hongxin Wei · 09 Oct 2024

QERA: an Analytical Framework for Quantization Error Reconstruction
Cheng Zhang, Jeffrey T. H. Wong, Can Xiao, George A. Constantinides, Yiren Zhao · MQ · 08 Oct 2024

Hyper Adversarial Tuning for Boosting Adversarial Robustness of Pretrained Large Vision Models
Kangtao Lv, Huangsen Cao, Kainan Tu, Yihuai Xu, Zhimeng Zhang, Xin Ding, Yongwei Wang · MoMe, AAML, VLM · 08 Oct 2024

AutoLoRA: AutoGuidance Meets Low-Rank Adaptation for Diffusion Models
Artur Kasymov, Marcin Sendera, Michał Stypułkowski, Maciej Zięba, Przemysław Spurek · 04 Oct 2024

Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint?
Xi Chen, Kaituo Feng, Changsheng Li, Xunhao Lai, Xiangyu Yue, Ye Yuan, Guoren Wang · 02 Oct 2024

Pear: Pruning and Sharing Adapters in Visual Parameter-Efficient Fine-Tuning
Yibo Zhong, Yao Zhou · 29 Sep 2024

Retrieval Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make your LLMs use External Data More Wisely
Siyun Zhao, Yuqing Yang, Zilong Wang, Zhiyuan He, Luna Qiu, Lili Qiu · SyDa, RALM, 3DV · 23 Sep 2024

Meta-Whisper: Speech-Based Meta-ICL for ASR on Low-Resource Languages
Ming-Hao Hsu, Kuan Po Huang, Hung-yi Lee · 16 Sep 2024

Exploring Intrinsic Language-specific Subspaces in Fine-tuning Multilingual Neural Machine Translation
Zhe Cao, Zhi Qu, Hidetaka Kamigaito, Taro Watanabe · MoE · 08 Sep 2024

Pre-training Everywhere: Parameter-Efficient Fine-Tuning for Medical Image Analysis via Target Parameter Pre-training
Xingliang Lei, Yiwen Ye, Ziyang Chen, Minglei Shu, Yong Xia · 27 Aug 2024

Flexora: Flexible Low Rank Adaptation for Large Language Models
Chenxing Wei, Yao Shu, Ying Tiffany He, Fei Richard Yu · AI4CE · 20 Aug 2024

What can Large Language Models Capture about Code Functional Equivalence?
Nickil Maveli, Antonio Vergari, Shay B. Cohen · 20 Aug 2024

AdaRank: Disagreement Based Module Rank Prediction for Low-rank Adaptation
Yihe Dong · ALM · 16 Aug 2024

Soft Language Prompts for Language Transfer
Ivan Vykopal, Simon Ostermann, Marian Simko · AAML · 02 Jul 2024

MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts
Dengchun Li, Yingzi Ma, Naizheng Wang, Zhengmao Ye, Zhiyuan Cheng, ..., Yan Zhang, Lei Duan, Jie Zuo, Cal Yang, Mingjie Tang · MoE · 22 Apr 2024

Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance
Liting Lin, Heng Fan, Zhipeng Zhang, Yaowei Wang, Yong-mei Xu, Haibin Ling · 08 Mar 2024

Asymmetry in Low-Rank Adapters of Foundation Models
Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez de Ocáriz Borde, Rickard Brüel-Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, Justin Solomon · 26 Feb 2024

OrchMoE: Efficient Multi-Adapter Learning with Task-Skill Synergy
Haowen Wang, Tao Sun, Kaixiang Ji, Jian Wang, Cong Fan, Jinjie Gu · 19 Jan 2024

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant · VPVLM · 18 Apr 2021

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang · LM&MA, VLM · 18 Mar 2020

Teaching Machines to Read and Comprehend
Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, L. Espeholt, W. Kay, Mustafa Suleyman, Phil Blunsom · 10 Jun 2015