The Power of Scale for Parameter-Efficient Prompt Tuning

18 April 2021
Brian Lester
Rami Al-Rfou
Noah Constant
    VPVLM
ArXiv (abs) | PDF | HTML

Papers citing "The Power of Scale for Parameter-Efficient Prompt Tuning"

Showing 50 of 2,607 citing papers
Protum: A New Method For Prompt Tuning Based on "[MASK]"
Pan He
Yuxi Chen
Yan Wang
Yanru Zhang
AAML
46
3
0
28 Jan 2022
Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model
Shaden Smith
M. Patwary
Brandon Norick
P. LeGresley
Samyam Rajbhandari
...
Mohammad Shoeybi
Yuxiong He
Michael Houston
Saurabh Tiwary
Bryan Catanzaro
MoE
165
744
0
28 Jan 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei
Xuezhi Wang
Dale Schuurmans
Maarten Bosma
Brian Ichter
F. Xia
Ed H. Chi
Quoc Le
Denny Zhou
LM&RoLRMAI4CEReLM
1.0K
9,796
0
28 Jan 2022
Learning to Compose Diversified Prompts for Image Emotion Classification
Sinuo Deng
Lifang Wu
Ge Shi
Lehao Xing
Meng Jian
Ye Xiang
CLIPVLM
57
31
0
26 Jan 2022
Context-Tuning: Learning Contextualized Prompts for Natural Language Generation
Tianyi Tang
Junyi Li
Wayne Xin Zhao
Ji-Rong Wen
83
17
0
21 Jan 2022
Black-box Prompt Learning for Pre-trained Language Models
Shizhe Diao
Zhichao Huang
Ruijia Xu
Xuechun Li
Yong Lin
Xiao Zhou
Tong Zhang
VLMAAML
102
71
0
21 Jan 2022
Instance-aware Prompt Learning for Language Understanding and Generation
Feihu Jin
Jinliang Lu
Jiajun Zhang
Chengqing Zong
57
33
0
18 Jan 2022
What Makes the Story Forward? Inferring Commonsense Explanations as Prompts for Future Event Generation
Li Lin
Yixin Cao
Lifu Huang
Shuang Li
Xuming Hu
Lijie Wen
Jianmin Wang
AI4TS
79
16
0
18 Jan 2022
ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization
Hanwei Xu
Yujun Chen
Yulun Du
Nan Shao
Yanggang Wang
Haiyu Li
Zhilin Yang
VLMLRMAI4CE
77
69
0
18 Jan 2022
UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models
Tianbao Xie
Chen Henry Wu
Peng Shi
Ruiqi Zhong
Torsten Scholak
...
Lingpeng Kong
Rui Zhang
Noah A. Smith
Luke Zettlemoyer
Tao Yu
LMTD
123
304
0
16 Jan 2022
A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models
Hanqing Zhang
Haolin Song
Shaoyu Li
Ming Zhou
Dawei Song
141
230
0
14 Jan 2022
Black-Box Tuning for Language-Model-as-a-Service
Tianxiang Sun
Yunfan Shao
Hong Qian
Xuanjing Huang
Xipeng Qiu
VLM
181
275
0
10 Jan 2022
Improved Input Reprogramming for GAN Conditioning
Tuan Dinh
Daewon Seo
Zhixu Du
Liang Shang
Kangwook Lee
AI4CE
103
8
0
07 Jan 2022
Efficient Large Scale Language Modeling with Mixtures of Experts
Mikel Artetxe
Shruti Bhosale
Naman Goyal
Todor Mihaylov
Myle Ott
...
Jeff Wang
Luke Zettlemoyer
Mona T. Diab
Zornitsa Kozareva
Ves Stoyanov
MoE
239
201
0
20 Dec 2021
Prompt Tuning GPT-2 language model for parameter-efficient domain adaptation of ASR systems
Saket Dingliwal
Ashish Shenoy
S. Bodapati
Ankur Gandhe
R. Gadde
Katrin Kirchhoff
VLM
55
4
0
16 Dec 2021
Learning to Prompt for Continual Learning
Zifeng Wang
Zizhao Zhang
Chen-Yu Lee
Han Zhang
Ruoxi Sun
Xiaoqi Ren
Guolong Su
Vincent Perot
Jennifer Dy
Tomas Pfister
CLLVPVLMKELMVLM
107
799
0
16 Dec 2021
Analyzing the Limits of Self-Supervision in Handling Bias in Language
Lisa Bauer
Karthik Gopalakrishnan
Spandana Gella
Yang Liu
Joey Tianyi Zhou
Dilek Z. Hakkani-Tür
ELM
41
1
0
16 Dec 2021
Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts
Daniel Khashabi
Xinxi Lyu
Sewon Min
Lianhui Qin
Kyle Richardson
...
Hannaneh Hajishirzi
Tushar Khot
Ashish Sabharwal
Sameer Singh
Yejin Choi
114
75
0
15 Dec 2021
LMTurk: Few-Shot Learners as Crowdsourcing Workers in a Language-Model-as-a-Service Framework
Mengjie Zhao
Fei Mi
Yasheng Wang
Minglei Li
Xin Jiang
Qun Liu
Hinrich Schütze
RALM
115
11
0
14 Dec 2021
VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks
Yi-Lin Sung
Jaemin Cho
Joey Tianyi Zhou
VLMVPVLM
114
358
0
13 Dec 2021
Discourse-Aware Soft Prompting for Text Generation
Marjan Ghazvininejad
Vladimir Karpukhin
Vera Gor
Asli Celikyilmaz
67
6
0
10 Dec 2021
MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning
C. Eichenberg
Sid Black
Samuel Weinbach
Letitia Parcalabescu
Anette Frank
MLLMVLM
72
101
0
09 Dec 2021
Prompting Visual-Language Models for Efficient Video Understanding
Chen Ju
Tengda Han
Kunhao Zheng
Ya Zhang
Weidi Xie
VPVLMVLM
115
384
0
08 Dec 2021
Grounded Language-Image Pre-training
Liunian Harold Li
Pengchuan Zhang
Haotian Zhang
Jianwei Yang
Chunyuan Li
...
Lu Yuan
Lei Zhang
Lei Li
Kai-Wei Chang
Jianfeng Gao
ObjDVLM
175
1,070
0
07 Dec 2021
Quantifying Adaptability in Pre-trained Language Models with 500 Tasks
Belinda Z. Li
Jane A. Yu
Madian Khabsa
Luke Zettlemoyer
A. Halevy
Jacob Andreas
ELM
89
17
0
06 Dec 2021
VT-CLIP: Enhancing Vision-Language Models with Visual-guided Texts
Longtian Qiu
Renrui Zhang
Ziyu Guo
Wei Zhang
Zilu Guo
Ziyao Zeng
Guangnan Zhang
VLMCLIP
80
45
0
04 Dec 2021
Uni-Perceiver: Pre-training Unified Architecture for Generic Perception for Zero-shot and Few-shot Tasks
Xizhou Zhu
Jinguo Zhu
Hao Li
Xiaoshi Wu
Xiaogang Wang
Hongsheng Li
Xiaohua Wang
Jifeng Dai
124
133
0
02 Dec 2021
A General Language Assistant as a Laboratory for Alignment
Amanda Askell
Yuntao Bai
Anna Chen
Dawn Drain
Deep Ganguli
...
Tom B. Brown
Jack Clark
Sam McCandlish
C. Olah
Jared Kaplan
ALM
143
791
0
01 Dec 2021
True Few-Shot Learning with Prompts -- A Real-World Perspective
Timo Schick
Hinrich Schütze
VLM
113
64
0
26 Nov 2021
Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains
Xinyu Zhang
S. Gu
Yutaka Matsuo
Yusuke Iwasawa
VLM
104
40
0
25 Nov 2021
Combined Scaling for Zero-shot Transfer Learning
Hieu H. Pham
Zihang Dai
Golnaz Ghiasi
Kenji Kawaguchi
Hanxiao Liu
...
Yi-Ting Chen
Minh-Thang Luong
Yonghui Wu
Mingxing Tan
Quoc V. Le
VLM
120
202
0
19 Nov 2021
Training Neural Networks with Fixed Sparse Masks
Yi-Lin Sung
Varun Nair
Colin Raffel
FedML
109
209
0
18 Nov 2021
LAnoBERT: System Log Anomaly Detection based on BERT Masked Language Model
Yukyung Lee
Jina Kim
Pilsung Kang
64
84
0
18 Nov 2021
On Transferability of Prompt Tuning for Natural Language Processing
Yusheng Su
Xiaozhi Wang
Yujia Qin
Chi-Min Chan
Yankai Lin
...
Peng Li
Juanzi Li
Lei Hou
Maosong Sun
Jie Zhou
AAMLVLM
83
106
0
12 Nov 2021
Response Generation with Context-Aware Prompt Learning
X. Gu
Kang Min Yoo
Sang-Woo Lee
89
25
0
04 Nov 2021
CLUES: Few-Shot Learning Evaluation in Natural Language Understanding
Subhabrata Mukherjee
Xiaodong Liu
Guoqing Zheng
Saghar Hosseini
Hao Cheng
Greg Yang
Christopher Meek
Ahmed Hassan Awadallah
Jianfeng Gao
ELM
70
11
0
04 Nov 2021
An Explanation of In-context Learning as Implicit Bayesian Inference
Sang Michael Xie
Aditi Raghunathan
Percy Liang
Tengyu Ma
ReLMBDLVPVLMLRM
311
764
0
03 Nov 2021
OpenPrompt: An Open-source Framework for Prompt-learning
Ning Ding
Shengding Hu
Weilin Zhao
Yulin Chen
Zhiyuan Liu
Haitao Zheng
Maosong Sun
VLMLLMAG
111
299
0
03 Nov 2021
Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
Bonan Min
Hayley L Ross
Elior Sulem
Amir Pouran Ben Veyseh
Thien Huu Nguyen
Oscar Sainz
Eneko Agirre
Ilana Heinz
Dan Roth
LM&MAVLMAI4CE
197
1,100
0
01 Nov 2021
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models
Xuxi Chen
Tianlong Chen
Weizhu Chen
Ahmed Hassan Awadallah
Zhangyang Wang
Yu Cheng
MoEALM
57
10
0
30 Oct 2021
Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning
Euna Jung
Jaekeol Choi
Wonjong Rhee
75
14
0
28 Oct 2021
PAGnol: An Extra-Large French Generative Model
Julien Launay
E. L. Tommasone
B. Pannier
François Boniface
A. Chatelain
Alessandro Cappelli
Iacopo Poli
Djamé Seddah
AILawMoEAI4CE
79
8
0
16 Oct 2021
The Power of Prompt Tuning for Low-Resource Semantic Parsing
Nathan Schucher
Siva Reddy
H. D. Vries
VLM
163
36
0
16 Oct 2021
Control Prefixes for Parameter-Efficient Text Generation
Jordan Clive
Kris Cao
Marek Rei
125
32
0
15 Oct 2021
Few-Shot Bot: Prompt-Based Learning for Dialogue Systems
Andrea Madotto
Zhaojiang Lin
Genta Indra Winata
Pascale Fung
93
85
0
15 Oct 2021
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu
Brian Lester
Noah Constant
Rami Al-Rfou
Daniel Cer
VLMLRM
223
290
0
15 Oct 2021
Exploring Universal Intrinsic Task Subspace via Prompt Tuning
Yujia Qin
Xiaozhi Wang
Yusheng Su
Yankai Lin
Ning Ding
...
Juanzi Li
Lei Hou
Peng Li
Maosong Sun
Jie Zhou
VLMVPVLM
190
29
0
15 Oct 2021
Meta-learning via Language Model In-context Tuning
Yanda Chen
Ruiqi Zhong
Sheng Zha
George Karypis
He He
321
162
0
15 Oct 2021
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu
Kaixuan Ji
Yicheng Fu
Weng Lam Tam
Zhengxiao Du
Zhilin Yang
Jie Tang
VLM
301
863
0
14 Oct 2021
UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning
Yuning Mao
Lambert Mathias
Rui Hou
Amjad Almahairi
Hao Ma
Jiawei Han
Wen-tau Yih
Madian Khabsa
90
198
0
14 Oct 2021