© 2025 ResearchTrend.AI, All rights reserved.

LoRA: Low-Rank Adaptation of Large Language Models
17 June 2021
J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
Tags: OffRL, AI4TS, AI4CE, ALM, AIMat

Papers citing "LoRA: Low-Rank Adaptation of Large Language Models"

50 / 1,919 papers shown
Towards Parameter-Efficient Integration of Pre-Trained Language Models In Temporal Video Grounding
Erica K. Shimomoto, Edison Marrese-Taylor, Hiroya Takamura, Ichiro Kobayashi, Hideki Nakayama, Yusuke Miyao
26 Sep 2022

Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models
Zichun Yu, Tianyu Gao, Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Maosong Sun, Jie Zhou
Tags: VLM, LRM
20 Sep 2022

PANCETTA: Phoneme Aware Neural Completion to Elicit Tongue Twisters Automatically
Sedrick Scott Keh, Steven Y. Feng, Varun Gangal, Malihe Alikhani, Eduard H. Hovy
13 Sep 2022

FETA: Towards Specializing Foundation Models for Expert Task Applications
Amit Alfassy, Assaf Arbelle, Oshri Halimi, Sivan Harary, Roei Herzig, ..., Christoph Auer, Kate Saenko, Peter W. J. Staar, Rogerio Feris, Leonid Karlinsky
08 Sep 2022
Efficient Methods for Natural Language Processing: A Survey
Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
31 Aug 2022

PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
Tags: VLM, CLL
22 Aug 2022

Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets
Hao Chen, R. Tao, Han Zhang, Yidong Wang, Xiang Li, Weirong Ye, Jindong Wang, Guosheng Hu, Marios Savvides
Tags: VPVLM
15 Aug 2022

Disentangled Modeling of Domain and Relevance for Adaptable Dense Retrieval
Jingtao Zhan, Qingyao Ai, Yiqun Liu, Jiaxin Mao, Xiaohui Xie, Hao Fei, Shaoping Ma
11 Aug 2022
Frozen CLIP Models are Efficient Video Learners
Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard de Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, Hongsheng Li
Tags: CLIP, VLM
06 Aug 2022

Training Large-Vocabulary Neural Language Models by Private Federated Learning for Resource-Constrained Devices
Mingbin Xu, Congzheng Song, Ye Tian, Neha Agrawal, Filip Granqvist, ..., Shiyi Han, Yaqiao Deng, Leo Liu, Anmol Walia, Alex Jin
Tags: FedML
18 Jul 2022

Convolutional Bypasses Are Better Vision Transformer Adapters
Shibo Jie, Zhi-Hong Deng
Tags: VPVLM
14 Jul 2022

Meta-Learning the Difference: Preparing Large Language Models for Efficient Adaptation
Zejiang Hou, Julian Salazar, George Polovets
07 Jul 2022

When Does Differentially Private Learning Not Suffer in High Dimensions?
Xuechen Li, Daogao Liu, Tatsunori Hashimoto, Huseyin A. Inan, Janardhan Kulkarni, Y. Lee, Abhradeep Thakurta
01 Jul 2022
FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification
Aliaksandra Shysheya, J. Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, Richard Turner
Tags: 3DH, FedML
17 Jun 2022

Zero-Shot Video Question Answering via Frozen Bidirectional Language Models
Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, Cordelia Schmid
16 Jun 2022

Self-Generated In-Context Learning: Leveraging Auto-regressive Language Models as a Demonstration Generator
Sungmin Cho, Hyunsoo Cho, Junyeob Kim, Taeuk Kim, Kang Min Yoo, Sang-goo Lee
16 Jun 2022

Sparse Structure Search for Parameter-Efficient Tuning
Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun
15 Jun 2022

Automatic Clipping: Differentially Private Deep Learning Made Easier and Stronger
Zhiqi Bu, Yu Wang, Sheng Zha, George Karypis
14 Jun 2022

LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning
Yi-Lin Sung, Jaemin Cho, Joey Tianyi Zhou
Tags: VLM
13 Jun 2022
Neural Prompt Search
Yuanhan Zhang, Kaiyang Zhou, Ziwei Liu
Tags: VPVLM, VLM
09 Jun 2022

Unveiling Transformers with LEGO: a synthetic reasoning task
Yi Zhang, A. Backurs, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Tal Wagner
Tags: LRM
09 Jun 2022

Parameter-Efficient and Student-Friendly Knowledge Distillation
Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao
28 May 2022

Can Foundation Models Help Us Achieve Perfect Secrecy?
Simran Arora, Christopher Ré
Tags: FedML
27 May 2022

Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations
Kang Min Yoo, Junyeob Kim, Sungmin Cho, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee, Taeuk Kim
25 May 2022
Memorization in NLP Fine-tuning Methods
Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David Evans, Taylor Berg-Kirkpatrick
Tags: AAML
25 May 2022

Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning
Mozhdeh Gheini, Xuezhe Ma, Jonathan May
25 May 2022

When does Parameter-Efficient Transfer Learning Work for Machine Translation?
Ahmet Üstün, Asa Cooper Stickland
23 May 2022

BBTv2: Towards a Gradient-Free Future with Large Language Models
Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu
23 May 2022

A Unified and Biologically-Plausible Relational Graph Representation of Vision Transformers
Yuzhong Chen, Yu Du, Zhe Xiao, Lin Zhao, Lu Zhang, ..., Dajiang Zhu, Tuo Zhang, Xintao Hu, Tianming Liu, Xi Jiang
Tags: ViT
20 May 2022

AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks
Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee
30 Apr 2022
Building a Role Specified Open-Domain Dialogue System Leveraging Large-Scale Language Models
Sanghwan Bae, Donghyun Kwak, Sungdong Kim, Dong-hyun Ham, Soyoung Kang, Sang-Woo Lee, W. Park
Tags: ALM
30 Apr 2022

Prompt Consistency for Zero-Shot Task Generalization
Chunting Zhou, Junxian He, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig
Tags: VLM
29 Apr 2022

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo
Tags: CLL, KELM
29 Apr 2022

On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model
Seongjin Shin, Sang-Woo Lee, Hwijeen Ahn, Sungdong Kim, Hyoungseok Kim, ..., Kyunghyun Cho, Gichang Lee, W. Park, Jung-Woo Ha, Nako Sung
Tags: LRM
28 Apr 2022

Plug-and-Play Adaptation for Continuously-updated QA
Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee
Tags: KELM
27 Apr 2022
A Contrastive Cross-Channel Data Augmentation Framework for Aspect-based Sentiment Analysis
Bing Wang, Liang Ding, Qihuang Zhong, Ximing Li, Dacheng Tao
16 Apr 2022

Impossible Triangle: What's Next for Pre-trained Language Models?
Chenguang Zhu, Michael Zeng
13 Apr 2022

DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning
Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, ..., Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister
Tags: CLL, VLM, VPVLM
10 Apr 2022

Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval
Robert Litschko, Ivan Vulić, Goran Glavaš
Tags: LRM
05 Apr 2022

Parameter-efficient Model Adaptation for Vision Transformers
Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, Junfeng Fang
29 Mar 2022

A Scalable Model Specialization Framework for Training and Inference using Submodels and its Application to Speech Model Personalization
Fadi Biadsy, Youzheng Chen, Xia Zhang, Oleg Rybakov, Andrew Rosenberg, Pedro J. Moreno
23 Mar 2022

Hyperdecoders: Instance-specific decoders for multi-task NLP
Hamish Ivison, Matthew E. Peters
Tags: AI4CE
15 Mar 2022
Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, ..., Jianfei Chen, Yang Liu, Jie Tang, Juan Li, Maosong Sun
14 Mar 2022

Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models
Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou
Tags: VLM, AAML
07 Mar 2022

Combining Modular Skills in Multitask Learning
Edoardo Ponti, Alessandro Sordoni, Yoshua Bengio, Siva Reddy
Tags: MoE
28 Feb 2022

Revisiting Parameter-Efficient Tuning: Are We Really There Yet?
Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang
16 Feb 2022

EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation
Tao Ge, Si-Qing Chen, Furu Wei
Tags: MoE
16 Feb 2022

CLIP-TD: CLIP Targeted Distillation for Vision-Language Tasks
Zhecan Wang, Noel Codella, Yen-Chun Chen, Luowei Zhou, Jianwei Yang, Xiyang Dai, Bin Xiao, Haoxuan You, Shih-Fu Chang, Lu Yuan
Tags: CLIP, VLM
15 Jan 2022

Black-Box Tuning for Language-Model-as-a-Service
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu
Tags: VLM
10 Jan 2022

Efficient Hierarchical Domain Adaptation for Pretrained Language Models
Alexandra Chronopoulou, Matthew E. Peters, Jesse Dodge
16 Dec 2021