AutoLoRA: Automatically Tuning Matrix Ranks in Low-Rank Adaptation Based on Meta Learning

14 March 2024
Ruiyi Zhang, Rushi Qiang, Sai Ashish Somayajula, Pengtao Xie
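As context for the citation list below: LoRA freezes a pretrained weight matrix W and learns a low-rank update (alpha / r) * B A, where the rank r is a per-layer hyperparameter that is normally fixed by hand; the paper's title indicates that AutoLoRA instead learns these ranks via meta-learning. The sketch below is illustrative only, not the authors' code: it gates each rank-1 component of a LoRA update so that an effective rank can be read off after training. The class name GatedLoRALinear, the gate threshold, and the initialization choices are assumptions made for the example.

import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """Frozen linear layer plus a gated low-rank update (illustrative sketch).

    Plain LoRA computes W x + (alpha / r) * B A x with a hand-picked rank r.
    Here each of the max_rank rank-1 components B[:, j] A[j, :] gets a gate;
    in an AutoLoRA-style scheme the gates would be tuned on held-out data
    (the meta-learning step) and thresholded to select an effective rank.
    """

    def __init__(self, in_features: int, out_features: int,
                 max_rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)            # pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(max_rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, max_rank))  # zero init: update starts at 0
        self.gates = nn.Parameter(torch.ones(max_rank))   # one gate per rank-1 component
        self.scale = alpha / max_rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (x A^T) has one column per rank-1 component; multiplying by the gates
        # scales each component independently before projecting back up via B.
        return self.base(x) + self.scale * ((x @ self.A.T) * self.gates) @ self.B.T

    def effective_rank(self, threshold: float = 0.5) -> int:
        # Components whose gates survive the threshold define the learned rank.
        return int((self.gates.abs() > threshold).sum().item())


layer = GatedLoRALinear(768, 768, max_rank=8)
print(layer(torch.randn(4, 768)).shape)  # torch.Size([4, 768])
print(layer.effective_rank())            # 8 before any gate training

In a bi-level setup of the kind the title suggests, A and B would be updated on the training loss while the gates are updated on a validation loss, alternating between the two levels; after convergence, effective_rank() yields a per-layer rank. Several of the citing papers below (e.g. BiDoRA) pursue similar bi-level optimization schemes.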

Papers citing "AutoLoRA: Automatically Tuning Matrix Ranks in Low-Rank Adaptation Based on Meta Learning"

13 / 13 papers shown
Each entry below gives the paper's title, its authors, and a final line with the topic tag (where one was shown), the publication date, and the three per-paper counts carried over from the source page in parentheses (the middle count appears to be a citation total; the other two are unlabeled).
DeLoRA: Decoupling Angles and Strength in Low-rank Adaptation
Massimo Bini, Leander Girrbach, Zeynep Akata
23 Mar 2025 (40 / 0 / 0)

RandLoRA: Full-rank parameter-efficient fine-tuning of large models
Paul Albert, Frederic Z. Zhang, Hemanth Saratchandran, Cristian Rodriguez-Opazo, Anton van den Hengel, Ehsan Abbasnejad
03 Feb 2025 (94 / 0 / 0)

Transfer Learning for Finetuning Large Language Models
Tobias Strangmann, Lennart Purucker, Jörg K.H. Franke, Ivo Rapant, Fabio Ferreira, Frank Hutter
02 Nov 2024 (44 / 0 / 0)

BiDoRA: Bi-level Optimization-Based Weight-Decomposed Low-Rank Adaptation
Peijia Qin, Ruiyi Zhang, Pengtao Xie
13 Oct 2024 (28 / 1 / 0)

Criticality Leveraged Adversarial Training (CLAT) for Boosted Performance via Parameter Efficiency
Bhavna Gopal, Huanrui Yang, Jingyang Zhang, Mark Horton, Yiran Chen
AAML · 19 Aug 2024 (27 / 0 / 0)

Structure-Preserving Network Compression Via Low-Rank Induced Training Through Linear Layers Composition
Xitong Zhang, Ismail R. Alkhouri, Rongrong Wang
06 May 2024 (38 / 0 / 0)

ReFT: Representation Finetuning for Language Models
Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Daniel Jurafsky, Christopher D. Manning, Christopher Potts
OffRL · 04 Apr 2024 (35 / 1 / 0)

Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang
21 Mar 2024 (150 / 310 / 0)

SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization
Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, C. Benjamins, Tim Ruhopf, René Sass, Frank Hutter
20 Sep 2021 (85 / 327 / 0)

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021 (280 / 3,848 / 0)

Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with $1/n$ Parameters
Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, A. Luu, S. Hui, Jie Fu
MQ · 17 Feb 2021 (176 / 83 / 0)

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 20 Apr 2018 (297 / 6,959 / 0)

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 09 Mar 2017 (332 / 11,684 / 0)