ResearchTrend.AI
Cited By: arXiv 2106.03164
On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation

6 June 2021
Ruidan He
Linlin Liu
Hai Ye
Qingyu Tan
Bosheng Ding
Liying Cheng
Jia-Wei Low
Lidong Bing
Luo Si
Papers citing "On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation" (39 papers; community tags shown in brackets):

  1. IterIS: Iterative Inference-Solving Alignment for LoRA Merging [MoMe]. Hongxu Chen, Runshi Li, Bowei Zhu, Zhen Wang, Long Chen. 21 Nov 2024.
  2. LoRA-Pro: Are Low-Rank Adapters Properly Optimized? Zhengbo Wang, Jian Liang, Ran He, Zilei Wang, Tieniu Tan. 25 Jul 2024.
  3. Compensate Quantization Errors+: Quantized Models Are Inquisitive Learners [MQ]. Yifei Gao, Jie Ou, Lei Wang, Fanhua Shang, Jaji Wu. 22 Jul 2024.
  4. Trans-LoRA: towards data-free Transferable Parameter Efficient Finetuning. Runqian Wang, Soumya Ghosh, David D. Cox, Diego Antognini, Aude Oliva, Rogerio Feris, Leonid Karlinsky. 27 May 2024.
  5. When LLMs Meet Cybersecurity: A Systematic Literature Review. Jie Zhang, Haoyu Bu, Hui Wen, Yu Chen, Lun Li, Hongsong Zhu. 06 May 2024.
  6. Let Your Graph Do the Talking: Encoding Structured Data for LLMs [GNN]. Bryan Perozzi, Bahare Fatemi, Dustin Zelle, Anton Tsitsulin, Mehran Kazemi, Rami Al-Rfou, Jonathan J. Halcrow. 08 Feb 2024.
  7. A Comprehensive Evaluation of Parameter-Efficient Fine-Tuning on Software Engineering Tasks. Wentao Zou, Qi Li, Jidong Ge, Chuanyi Li, Xiaoyu Shen, LiGuo Huang, Bin Luo. 25 Dec 2023.
  8. PrivateLoRA For Efficient Privacy Preserving LLM. Yiming Wang, Yu Lin, Xiaodong Zeng, Guannan Zhang. 23 Nov 2023.
  9. Efficient Domain Adaptation of Sentence Embeddings Using Adapters. Tim Schopf, Dennis Schneider, Florian Matthes. 06 Jul 2023.
  10. Harnessing the Power of Adversarial Prompting and Large Language Models for Robust Hypothesis Generation in Astronomy. I. Ciucă, Y. Ting 丁, Sandor Kruk, K. Iyer. 20 Jun 2023.
  11. Measuring the Robustness of NLP Models to Domain Shifts. Nitay Calderon, Naveh Porat, Eyal Ben-David, Alexander Chapanin, Zorik Gekhman, Nadav Oved, Vitaly Shalumov, Roi Reichart. 31 May 2023.
  12. AdapterEM: Pre-trained Language Model Adaptation for Generalized Entity Matching using Adapter-tuning. John Bosco Mugeni, S. Lynden, Toshiyuki Amagasa, Akiyoshi Matono. 30 May 2023.
  13. On Robustness of Finetuned Transformer-based NLP Models. Pavan Kalyan Reddy Neerudu, S. Oota, Mounika Marreddy, Venkateswara Rao Kagita, Manish Gupta. 23 May 2023.
  14. TADA: Efficient Task-Agnostic Domain Adaptation for Transformers. Chia-Chien Hung, Lukas Lange, Jannik Strötgen. 22 May 2023.
  15. Modular Deep Learning [MoMe, OOD]. Jonas Pfeiffer, Sebastian Ruder, Ivan Vulić, E. Ponti. 22 Feb 2023.
  16. A Stability Analysis of Fine-Tuning a Pre-Trained Model. Z. Fu, Anthony Man-Cho So, Nigel Collier. 24 Jan 2023.
  17. Is GPT-3 a Good Data Annotator? Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq R. Joty, Boyang Albert Li, Lidong Bing. 20 Dec 2022.
  18. CHAPTER: Exploiting Convolutional Neural Network Adapters for Self-supervised Speech Models. Zih-Ching Chen, Yu-Shun Sung, Hung-yi Lee. 01 Dec 2022.
  19. AF Adapter: Continual Pretraining for Building Chinese Biomedical Language Model [CLL]. Yongyu Yan, Kui Xue, Xiaoming Shi, Qi Ye, Jingping Liu, Tong Ruan. 21 Nov 2022.
  20. Towards Robust Low-Resource Fine-Tuning with Multi-View Compressed Representations. Linlin Liu, Xingxuan Li, Megh Thakkar, Xin Li, Shafiq R. Joty, Luo Si, Lidong Bing. 16 Nov 2022.
  21. Parameter-Efficient Tuning on Layer Normalization for Pre-trained Language Models. Wang Qi, Yu-Ping Ruan, Y. Zuo, Taihao Li. 16 Nov 2022.
  22. Learning Better Intent Representations for Financial Open Intent Classification [AIFin]. Xianzhi Li, Will Aitken, Xiao-Dan Zhu, Stephen W. Thomas. 25 Oct 2022.
  23. Evaluating Parameter Efficient Learning for Generation [MoE]. Peng-Tao Xu, M. Patwary, Shrimai Prabhumoye, Virginia Adams, R. Prenger, Ming-Yu Liu, Nayeon Lee, M. Shoeybi, Bryan Catanzaro. 25 Oct 2022.
  24. MiniALBERT: Model Distillation via Parameter-Efficient Recursive Transformers. Mohammadmahdi Nouriborji, Omid Rohanian, Samaneh Kouchaki, David A. Clifton. 12 Oct 2022.
  25. Sparse Structure Search for Parameter-Efficient Tuning. Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun. 15 Jun 2022.
  26. Efficient Few-Shot Fine-Tuning for Opinion Summarization. Arthur Bražinskas, Ramesh Nallapati, Joey Tianyi Zhou, Markus Dreyer. 04 May 2022.
  27. Adaptable Adapters. N. Moosavi, Quentin Delfosse, Kristian Kersting, Iryna Gurevych. 03 May 2022.
  28. IDPG: An Instance-Dependent Prompt Generation Method [VLM]. Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. Vydiswaran, Hao Ma. 09 Apr 2022.
  29. Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models. Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, ..., Jianfei Chen, Yang Liu, Jie Tang, Juan Li, Maosong Sun. 14 Mar 2022.
  30. ELLE: Efficient Lifelong Pre-training for Emerging Data. Yujia Qin, Jiajie Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou. 12 Mar 2022.
  31. Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models [VLM, AAML]. Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou. 07 Mar 2022.
  32. Revisiting Parameter-Efficient Tuning: Are We Really There Yet? Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang. 16 Feb 2022.
  33. TransLog: A Unified Transformer-based Framework for Log Anomaly Detection. Hongcheng Guo, Xin-Xue Lin, Jian Yang, Yi Zhuang, Jiaqi Bai, Tieqiao Zheng, Bo Zhang, Zhoujun Li. 31 Dec 2021.
  34. Enhancing Multilingual Language Model with Massive Multilingual Knowledge Triples [KELM]. Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq R. Joty, Luo Si. 22 Nov 2021.
  35. Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning. Euna Jung, Jaekeol Choi, Wonjong Rhee. 28 Oct 2021.
  36. Robust Transfer Learning with Pretrained Language Models through Adapters. Wenjuan Han, Bo Pang, Ying Nian Wu. 05 Aug 2021.
  37. FreeLB: Enhanced Adversarial Training for Natural Language Understanding [AAML]. Chen Zhu, Yu Cheng, Zhe Gan, S. Sun, Tom Goldstein, Jingjing Liu. 25 Sep 2019.
  38. Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models [MoE]. Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang. 25 Sep 2019.
  39. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding [ELM]. Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018.