ResearchTrend.AI

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
arXiv: 2203.06904
14 March 2022
Ning Ding
Yujia Qin
Guang Yang
Fuchao Wei
Zonghan Yang
Yusheng Su
Shengding Hu
Yulin Chen
Chi-Min Chan
Weize Chen
Jing Yi
Weilin Zhao
Xiaozhi Wang
Zhiyuan Liu
Haitao Zheng
Jianfei Chen
Yang Liu
Jie Tang
Juanzi Li
Maosong Sun

Papers citing "Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models"

Showing 50 of 155 citing papers
Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA
James Smith
Yen-Chang Hsu
Lingyu Zhang
Ting Hua
Z. Kira
Yilin Shen
Hongxia Jin
DiffM
131
95
0
12 Apr 2023
Rethinking Dense Retrieval's Few-Shot Ability
Si Sun
Yi-Wen Lu
Shi Yu
Xiangyang Li
Zhonghua Li
Zhao Cao
Zhiyuan Liu
Deming Ye
Jie Bao
8
0
0
12 Apr 2023
Troika: Multi-Path Cross-Modal Traction for Compositional Zero-Shot Learning
Siteng Huang
Biao Gong
Yutong Feng
Min Zhang
Yiliang Lv
Donglin Wang
CoGe
32
10
0
27 Mar 2023
Parameter-Efficient Sparse Retrievers and Rerankers using Adapters
Vaishali Pal
Carlos Lassance
Hervé Déjean
S. Clinchant
135
3
0
23 Mar 2023
SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models
Vithursan Thangarasa
Abhay Gupta
William Marshall
Tianda Li
Kevin Leong
D. DeCoste
Sean Lie
Shreyas Saxena
MoE
AI4CE
16
18
0
18 Mar 2023
Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
Zhen Wang
Rameswar Panda
Leonid Karlinsky
Rogerio Feris
Huan Sun
Yoon Kim
VLM
VPVLM
17
107
0
06 Mar 2023
SMoA: Sparse Mixture of Adapters to Mitigate Multiple Dataset Biases
Yanchen Liu
Jing Yang
Yan Chen
Jing Liu
Huaqin Wu
MoE
47
2
0
28 Feb 2023
How Does In-Context Learning Help Prompt Tuning?
Simeng Sun
Yang Liu
Dan Iter
Chenguang Zhu
Mohit Iyyer
VLM
35
17
0
22 Feb 2023
Complex QA and language models hybrid architectures, Survey
Xavier Daull
P. Bellot
Emmanuel Bruno
Vincent Martin
Elisabeth Murisasco
ELM
28
15
0
17 Feb 2023
Differentiable Entailment for Parameter Efficient Few Shot Learning
Ethan Kim
Jerry Yang
21
0
0
31 Jan 2023
One Model for All Domains: Collaborative Domain-Prefix Tuning for Cross-Domain NER
Xiang Chen
Lei Li
Q. Fei
Ningyu Zhang
Chuanqi Tan
Yong-jia Jiang
Fei Huang
Huajun Chen
26
23
0
25 Jan 2023
Parameter-Efficient Fine-Tuning Design Spaces
Jiaao Chen
Aston Zhang
Xingjian Shi
Mu Li
Alexander J. Smola
Diyi Yang
31
59
0
04 Jan 2023
When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods
Zhuo Zhang
Yuanhang Yang
Yong Dai
Lizhen Qu
Zenglin Xu
FedML
38
65
0
20 Dec 2022
Reasoning with Language Model Prompting: A Survey
Shuofei Qiao
Yixin Ou
Ningyu Zhang
Xiang Chen
Yunzhi Yao
Shumin Deng
Chuanqi Tan
Fei Huang
Huajun Chen
ReLM
ELM
LRM
68
311
0
19 Dec 2022
Decoder Tuning: Efficient Language Understanding as Decoding
Ganqu Cui
Wentao Li
Ning Ding
Longtao Huang
Zhiyuan Liu
Maosong Sun
21
6
0
16 Dec 2022
SPARTAN: Sparse Hierarchical Memory for Parameter-Efficient Transformers
A. Deshpande
Md Arafat Sultan
Anthony Ferritto
A. Kalyan
Karthik Narasimhan
Avirup Sil
MoE
33
1
0
29 Nov 2022
On the Effectiveness of Parameter-Efficient Fine-Tuning
Z. Fu
Haoran Yang
Anthony Man-Cho So
Wai Lam
Lidong Bing
Nigel Collier
19
156
0
28 Nov 2022
HyperTuning: Toward Adapting Large Language Models without Back-propagation
Jason Phang
Yi Mao
Pengcheng He
Weizhu Chen
14
30
0
22 Nov 2022
ConStruct-VL: Data-Free Continual Structured VL Concepts Learning
James Smith
Paola Cascante-Bonilla
Assaf Arbelle
Donghyun Kim
Rameswar Panda
David D. Cox
Diyi Yang
Z. Kira
Rogerio Feris
Leonid Karlinsky
VLM
39
20
0
17 Nov 2022
FPT: Improving Prompt Tuning Efficiency via Progressive Training
Yufei Huang
Yujia Qin
Huadong Wang
Yichun Yin
Maosong Sun
Zhiyuan Liu
Qun Liu
VLM
LRM
27
6
0
13 Nov 2022
On the Domain Adaptation and Generalization of Pretrained Language Models: A Survey
Xu Guo
Han Yu
LM&MA
VLM
28
29
0
06 Nov 2022
A Close Look into the Calibration of Pre-trained Language Models
Yangyi Chen
Lifan Yuan
Ganqu Cui
Zhiyuan Liu
Heng Ji
25
43
0
31 Oct 2022
Parameter-Efficient Tuning Makes a Good Classification Head
Zhuoyi Yang
Ming Ding
Yanhui Guo
Qingsong Lv
Jie Tang
VLM
37
14
0
30 Oct 2022
Exploring Mode Connectivity for Pre-trained Language Models
Yujia Qin
Cheng Qian
Jing Yi
Weize Chen
Yankai Lin
Xu Han
Zhiyuan Liu
Maosong Sun
Jie Zhou
29
20
0
25 Oct 2022
Evaluating Parameter Efficient Learning for Generation
Peng-Tao Xu
M. Patwary
Shrimai Prabhumoye
Virginia Adams
R. Prenger
Wei Ping
Nayeon Lee
M. Shoeybi
Bryan Catanzaro
MoE
33
3
0
25 Oct 2022
Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning
Jing Yi
Weize Chen
Yujia Qin
Yankai Lin
Ning Ding
Xu Han
Zhiyuan Liu
Maosong Sun
Jie Zhou
15
2
0
24 Oct 2022
Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts
Xiangyang Liu
Tianxiang Sun
Xuanjing Huang
Xipeng Qiu
VLM
36
27
0
20 Oct 2022
Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning
Tianxiang Sun
Zhengfu He
Qinen Zhu
Xipeng Qiu
Xuanjing Huang
VLM
VPVLM
12
20
0
14 Oct 2022
Unified Detoxifying and Debiasing in Language Generation via Inference-time Adaptive Optimization
Zonghan Yang
Xiaoyuan Yi
Peng Li
Yang Liu
Xing Xie
25
33
0
10 Oct 2022
XPrompt: Exploring the Extreme of Prompt Tuning
Fang Ma
Chen Zhang
Lei Ren
Jingang Wang
Qifan Wang
Wei Yu Wu
Xiaojun Quan
Dawei Song
VLM
110
37
0
10 Oct 2022
Parameter-Efficient Tuning with Special Token Adaptation
Xiaocong Yang
James Y. Huang
Wenxuan Zhou
Muhao Chen
26
12
0
10 Oct 2022
Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation
Xu Guo
Boyang Albert Li
Han Yu
VLM
39
22
0
06 Oct 2022
Universal Prompt Tuning for Graph Neural Networks
Taoran Fang
Yunchao Zhang
Yang Yang
Chunping Wang
Lei Chen
24
47
0
30 Sep 2022
Transformers with Learnable Activation Functions
Haishuo Fang
Ji-Ung Lee
N. Moosavi
Iryna Gurevych
AI4CE
25
7
0
30 Aug 2022
Disentangled Modeling of Domain and Relevance for Adaptable Dense Retrieval
Jingtao Zhan
Qingyao Ai
Yiqun Liu
Jiaxin Mao
Xiaohui Xie
M. Zhang
Shaoping Ma
30
10
0
11 Aug 2022
Improving Task Generalization via Unified Schema Prompt
Wanjun Zhong
Yifan Gao
Ning Ding
Zhiyuan Liu
Ming Zhou
Jiahai Wang
Jian Yin
Nan Duan
27
8
0
05 Aug 2022
Embedding Recycling for Language Models
Jon Saad-Falcon
Amanpreet Singh
Luca Soldaini
Mike D'Arcy
Arman Cohan
Doug Downey
KELM
13
4
0
11 Jul 2022
A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks
Ganqu Cui
Lifan Yuan
Bingxiang He
Yangyi Chen
Zhiyuan Liu
Maosong Sun
AAML
ELM
SILM
24
68
0
17 Jun 2022
Sparse Structure Search for Parameter-Efficient Tuning
Shengding Hu
Zhen Zhang
Ning Ding
Yadao Wang
Yasheng Wang
Zhiyuan Liu
Maosong Sun
29
16
0
15 Jun 2022
RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Mingkai Deng
Jianyu Wang
Cheng-Ping Hsieh
Yihan Wang
Han Guo
Tianmin Shu
Meng Song
Eric P. Xing
Zhiting Hu
21
318
0
25 May 2022
BBTv2: Towards a Gradient-Free Future with Large Language Models
Tianxiang Sun
Zhengfu He
Hong Qian
Yunhua Zhou
Xuanjing Huang
Xipeng Qiu
108
53
0
23 May 2022
AdaVAE: Exploring Adaptive GPT-2s in Variational Auto-Encoders for Language Modeling
Haoqin Tu
Zhongliang Yang
Jinshuai Yang
Yong Huang
15
12
0
12 May 2022
Revisiting Parameter-Efficient Tuning: Are We Really There Yet?
Guanzheng Chen
Fangyu Liu
Zaiqiao Meng
Shangsong Liang
26
88
0
16 Feb 2022
OpenPrompt: An Open-source Framework for Prompt-learning
Ning Ding
Shengding Hu
Weilin Zhao
Yulin Chen
Zhiyuan Liu
Haitao Zheng
Maosong Sun
VLM
LLMAG
23
284
0
03 Nov 2021
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu
Brian Lester
Noah Constant
Rami Al-Rfou
Daniel Matthew Cer
VLM
LRM
137
277
0
15 Oct 2021
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu
Kaixuan Ji
Yicheng Fu
Weng Lam Tam
Zhengxiao Du
Zhilin Yang
Jie Tang
VLM
238
806
0
14 Oct 2021
MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators
Zhixing Tan
Xiangwen Zhang
Shuo Wang
Yang Liu
VLM
LRM
213
52
0
13 Oct 2021
Single-dataset Experts for Multi-dataset Question Answering
Dan Friedman
Ben Dodge
Danqi Chen
MoMe
132
26
0
28 Sep 2021
Paradigm Shift in Natural Language Processing
Tianxiang Sun
Xiangyang Liu
Xipeng Qiu
Xuanjing Huang
118
82
0
26 Sep 2021
CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
Qinyuan Ye
Bill Yuchen Lin
Xiang Ren
211
179
0
18 Apr 2021