On Transferability of Prompt Tuning for Natural Language Processing

12 November 2021 (arXiv:2111.06719)
Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou
AAML, VLM

Papers citing "On Transferability of Prompt Tuning for Natural Language Processing"

26 / 26 papers shown
Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer
Enming Zhang, Liwen Cao, Yanru Wu, Zijie Zhao, Guan Wang, Yang Li
09 Apr 2025
Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies
L. Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang
24 Oct 2024
PACE: Marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization
Yao Ni, Shan Zhang, Piotr Koniusz
25 Sep 2024
Emergence of a High-Dimensional Abstraction Phase in Language Transformers
Emily Cheng, Diego Doimo, Corentin Kervadec, Iuri Macocco, Jade Yu, A. Laio, Marco Baroni
24 May 2024
Distilling Reasoning Ability from Large Language Models with Adaptive Thinking
Xiao Chen, Sihang Zhou, K. Liang, Xinwang Liu
ReLM, LRM
14 Apr 2024
UniMEEC: Towards Unified Multimodal Emotion Recognition and Emotion Cause
Guimin Hu, Zhihong Zhu, Daniel Hershcovich, Hasti Seifi, Jiayuan Xie
30 Mar 2024
A Scalable and Adaptive System to Infer the Industry Sectors of Companies: Prompt + Model Tuning of Generative Language Models
Le-le Cao, Vilhelm von Ehrenheim, Astrid Berghult, Cecilia Henje, Richard Anselmo Stahl, Joar Wandborg, S. Stan, Armin Catovic, Erik Ferm, Hannes Ingelhag
05 Jun 2023
TaskWeb: Selecting Better Source Tasks for Multi-task NLP
Joongwon Kim, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi
22 May 2023
Black-box Prompt Tuning with Subspace Learning
Yuanhang Zheng, Zhixing Tan, Peng Li, Yang Liu
VLM
04 May 2023
VPGTrans: Transfer Visual Prompt Generator across LLMs
Ao Zhang, Hao Fei, Yuan Yao, Wei Ji, Li Li, Zhiyuan Liu, Tat-Seng Chua
MLLM, VLM
02 May 2023
Multimodal Grounding for Embodied AI via Augmented Reality Headsets for Natural Language Driven Task Planning
Selma Wanna, Fabian Parra, R. Valner, Karl Kruusamäe, Mitch Pryor
LM&Ro
26 Apr 2023
Global Prompt Cell: A Portable Control Module for Effective Prompt Tuning
Chi-Liang Liu, Hao Wang, Nuwa Xi, Sendong Zhao, Bing Qin
VLM
12 Apr 2023
Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
Zhen Wang, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Huan Sun, Yoon Kim
VLM, VPVLM
06 Mar 2023
Can discrete information extraction prompts generalize across language models?
Nathanaël Carraz Rakotonirina, Roberto Dessì, Fabio Petroni, Sebastian Riedel, Marco Baroni
20 Feb 2023
LabelPrompt: Effective Prompt-based Learning for Relation Classification
Wenbo Zhang, Xiaoning Song, Zhenhua Feng, Tianyang Xu, Xiaojun Wu
VLM
16 Feb 2023
One Model for All Domains: Collaborative Domain-Prefix Tuning for Cross-Domain NER
Xiang Chen, Lei Li, Q. Fei, Ningyu Zhang, Chuanqi Tan, Yong-jia Jiang, Fei Huang, Huajun Chen
25 Jan 2023
HyperTuning: Toward Adapting Large Language Models without Back-propagation
Jason Phang, Yi Mao, Pengcheng He, Weizhu Chen
22 Nov 2022
Evaluating Parameter Efficient Learning for Generation
Peng-Tao Xu, M. Patwary, Shrimai Prabhumoye, Virginia Adams, R. Prenger, Ming-Yu Liu, Nayeon Lee, M. Shoeybi, Bryan Catanzaro
MoE
25 Oct 2022
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
VLM
06 Oct 2022
PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
VLM, CLL
22 Aug 2022
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer
VLM, LRM
15 Oct 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
18 Apr 2021
WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
AAML
01 Jan 2021
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
20 Apr 2018