DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution

27 May 2024 · arXiv:2405.17357
Yulong Mao, Kaiyu Huang, Changhao Guan, Ganglin Bao, Fengran Mo, Jinan Xu

Papers citing "DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution"

13 of 13 citing papers shown:
TwinTURBO: Semi-Supervised Fine-Tuning of Foundation Models via Mutual Information Decompositions for Downstream Task and Latent Spaces
Guillaume Quétant, Pavlo Molchanov, Slava Voloshynovskiy · 10 Mar 2025

GeoLoRA: Geometric integration for parameter efficient fine-tuning
Steffen Schotthöfer, Emanuele Zangrando, Gianluca Ceruti, Francesco Tudisco, J. Kusch · 24 Oct 2024 · Topics: AI4CE

A Bayesian Interpretation of Adaptive Low-Rank Adaptation
Haolin Chen, Philip N. Garner · 16 Sep 2024

Flexora: Flexible Low Rank Adaptation for Large Language Models
Chenxing Wei, Yao Shu, Ying Tiffany He, Fei Richard Yu · 20 Aug 2024 · Topics: AI4CE

Accurate and Efficient Fine-Tuning of Quantized Large Language Models Through Optimal Balance
Ao Shen, Qiang Wang, Zhiquan Lai, Xionglve Li, Dongsheng Li · 24 Jul 2024 · Topics: ALM, MQ

A Survey on LoRA of Large Language Models
Yuren Mao, Yuhang Ge, Yijiang Fan, Wenyi Xu, Yu Mi, Zhonghao Hu, Yunjun Gao · 08 Jul 2024 · Topics: ALM

Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection
Rheeya Uppaal, Junjie Hu, Yixuan Li · 22 May 2023 · Topics: OODD

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer · 15 Oct 2021 · Topics: VLM, LRM

MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators
Zhixing Tan, Xiangwen Zhang, Shuo Wang, Yang Liu · 13 Oct 2021 · Topics: VLM, LRM

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant · 18 Apr 2021 · Topics: VPVLM

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen · 31 Dec 2020

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang · 18 Mar 2020 · Topics: LM&MA, VLM

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · 20 Apr 2018 · Topics: ELM