ResearchTrend.AI

The Power of Scale for Parameter-Efficient Prompt Tuning
v2 (latest) · 18 April 2021
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM

Papers citing "The Power of Scale for Parameter-Efficient Prompt Tuning"

50 / 2,607 papers shown
MACSum: Controllable Summarization with Mixed Attributes
  Yusen Zhang, Yang Liu, Ziyi Yang, Yuwei Fang, Yulong Chen, Dragomir R. Radev, Chenguang Zhu, Michael Zeng, Rui Zhang. 09 Nov 2022. [88, 17, 0]
Creative Writing with an AI-Powered Writing Assistant: Perspectives from Professional Writers
  Daphne Ippolito, Ann Yuan, Andy Coenen, Sehmon Burnam. 09 Nov 2022. [103, 101, 0]
Zero-Label Prompt Selection
  Chonghua Liao, Yanan Zheng, Zhilin Yang. VLM. 09 Nov 2022. [56, 7, 0]
Active Example Selection for In-Context Learning
  Yiming Zhang, Shi Feng, Chenhao Tan. SILMLRM. 08 Nov 2022. [114, 207, 0]
COPEN: Probing Conceptual Knowledge in Pre-trained Language Models
  Hao Peng, Xiaozhi Wang, Shengding Hu, Hailong Jin, Lei Hou, Juanzi Li, Zhiyuan Liu, Qun Liu. 08 Nov 2022. [89, 25, 0]
Pretraining in Deep Reinforcement Learning: A Survey
  Zhihui Xie, Zichuan Lin, Junyou Li, Shuai Li, Deheng Ye. OffRLOnRLAI4CE. 08 Nov 2022. [85, 23, 0]
On the Domain Adaptation and Generalization of Pretrained Language Models: A Survey
  Xu Guo, Han Yu. LM&MAVLM. 06 Nov 2022. [145, 30, 0]
Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning
  Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek Abdelzaher, Jiawei Han. VLM. 06 Nov 2022. [122, 49, 0]
Prompt-based Text Entailment for Low-Resource Named Entity Recognition
  Dongfang Li, Baotian Hu, Qingcai Chen. 06 Nov 2022. [71, 6, 0]
Continuous Prompt Tuning Based Textual Entailment Model for E-commerce Entity Typing
  Yibo Wang, Congying Xia, Guan Wang, Philip Yu. 04 Nov 2022. [55, 6, 0]
Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models
  Cheng Ma, Yang Liu, Jiankang Deng, Lingxi Xie, Weiming Dong, Changsheng Xu. VLMVPVLM. 04 Nov 2022. [106, 47, 0]
Could Giant Pretrained Image Models Extract Universal Representations?
  Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao. VLM. 03 Nov 2022. [106, 9, 0]
Large Language Models Are Human-Level Prompt Engineers
  Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, Jimmy Ba. ALMLLMAG. 03 Nov 2022. [195, 906, 0]
Eliciting Knowledge from Large Pre-Trained Models for Unsupervised Knowledge-Grounded Conversation
  Yanyang Li, Jianqiao Zhao, Michael R. Lyu, Liwei Wang. 03 Nov 2022. [70, 16, 0]
Learning a Condensed Frame for Memory-Efficient Video Class-Incremental Learning
  Yixuan Pei, Zhiwu Qing, Jun Cen, Xiang Wang, Shiwei Zhang, Yaxiong Wang, Mingqian Tang, Nong Sang, Xueming Qian. 02 Nov 2022. [66, 13, 0]
Two-stage LLM Fine-tuning with Less Specialization and More Generalization
  Yihan Wang, Si Si, Daliang Li, Michal Lukasik, Felix X. Yu, Cho-Jui Hsieh, Inderjit S Dhillon, Sanjiv Kumar. 01 Nov 2022. [137, 30, 0]
A Close Look into the Calibration of Pre-trained Language Models
  Yangyi Chen, Lifan Yuan, Ganqu Cui, Zhiyuan Liu, Heng Ji. 31 Oct 2022. [153, 53, 0]
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning
  Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao. MoE. 31 Oct 2022. [109, 136, 0]
Learning New Tasks from a Few Examples with Soft-Label Prototypes
  Avyav Kumar Singh, Ekaterina Shutova, H. Yannakoudakis. VLM. 31 Oct 2022. [89, 0, 0]
GPS: Genetic Prompt Search for Efficient Few-shot Learning
  Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, Zhilin Yang. VLM. 31 Oct 2022. [63, 31, 0]
Parameter-Efficient Tuning Makes a Good Classification Head
  Zhuoyi Yang, Ming Ding, Yanhui Guo, Qingsong Lv, Jie Tang. VLM. 30 Oct 2022. [108, 14, 0]
Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models
  Xiaoman Pan, Wenlin Yao, Hongming Zhang, Dian Yu, Dong Yu, Jianshu Chen. KELM. 28 Oct 2022. [296, 25, 0]
What Language Model to Train if You Have One Million GPU Hours?
  Teven Le Scao, Thomas Wang, Daniel Hesslow, Lucile Saulnier, Stas Bekman, ..., Lintang Sutawika, Jaesung Tae, Zheng-Xin Yong, Julien Launay, Iz Beltagy. MoEAI4CE. 27 Oct 2022. [320, 109, 0]
Multi-lingual Evaluation of Code Generation Models
  Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, ..., Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang. ELM. 26 Oct 2022. [191, 177, 0]
Don't Prompt, Search! Mining-based Zero-Shot Learning with Language Models
  Mozes van de Kar, Mengzhou Xia, Danqi Chen, Mikel Artetxe. 26 Oct 2022. [93, 19, 0]
Incorporating Pre-training Paradigm for Antibody Sequence-Structure Co-design
  Kaiyuan Gao, Lijun Wu, Jinhua Zhu, Tianbo Peng, Yingce Xia, ..., Shufang Xie, Tao Qin, Haiguang Liu, Kun He, Tie-Yan Liu. 26 Oct 2022. [92, 10, 0]
Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning
  Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, Dilek Z. Hakkani-Tür. 26 Oct 2022. [63, 4, 0]
Bi-Link: Bridging Inductive Link Predictions from Text via Contrastive Learning of Transformers and Prompts
  Bohua Peng, Shi Liang, Mobarakol Islam. 26 Oct 2022. [79, 2, 0]
Learning Better Intent Representations for Financial Open Intent Classification
  Xianzhi Li, Will Aitken, Xiao-Dan Zhu, Stephen W. Thomas. AIFin. 25 Oct 2022. [70, 8, 0]
Exploring Mode Connectivity for Pre-trained Language Models
  Yujia Qin, Cheng Qian, Jing Yi, Weize Chen, Yankai Lin, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou. 25 Oct 2022. [97, 21, 0]
This joke is [MASK]: Recognizing Humor and Offense with Prompting
  Junze Li, Mengjie Zhao, Yubo Xie, Antonis Maronikolakis, Pearl Pu, Hinrich Schütze. AAML. 25 Oct 2022. [61, 1, 0]
Multilingual Relation Classification via Efficient and Effective Prompting
  Yuxuan Chen, David Harbecke, Leonhard Hennig. LRM. 25 Oct 2022. [87, 12, 0]
Evaluating Parameter Efficient Learning for Generation
  Peng Xu, M. Patwary, Shrimai Prabhumoye, Virginia Adams, R. Prenger, Ming-Yu Liu, Nayeon Lee, Mohammad Shoeybi, Bryan Catanzaro. MoE. 25 Oct 2022. [69, 3, 0]
Does Self-Rationalization Improve Robustness to Spurious Correlations?
  Alexis Ross, Matthew E. Peters, Ana Marasović. LRM. 24 Oct 2022. [104, 13, 0]
Towards Better Few-Shot and Finetuning Performance with Forgetful Causal Language Models
  Hao Liu, Xinyang Geng, Lisa Lee, Igor Mordatch, Sergey Levine, Sharan Narang, Pieter Abbeel. KELMCLL. 24 Oct 2022. [89, 2, 0]
Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning
  Jing Yi, Weize Chen, Yujia Qin, Yankai Lin, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou. 24 Oct 2022. [113, 2, 0]
NVIDIA FLARE: Federated Learning from Simulation to Real-World
  H. Roth, Yan Cheng, Yuhong Wen, Isaac Yang, Ziyue Xu, ..., Daguang Xu, Nic Ma, Prerna Dogra, Mona G. Flores, Andrew Feng. FedMLAI4CE. 24 Oct 2022. [97, 101, 0]
Exploring Euphemism Detection in Few-Shot and Zero-Shot Settings
  Sedrick Scott Keh. 24 Oct 2022. [50, 7, 0]
Unsupervised Non-transferable Text Classification
  Guangtao Zeng, Wei Lu. 23 Oct 2022. [96, 6, 0]
Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning
  Xiangyu Peng, Chen Xing, Prafulla Kumar Choubey, Chien-Sheng Wu, Caiming Xiong. VLM. 23 Oct 2022. [137, 12, 0]
LMPriors: Pre-Trained Language Models as Task-Specific Priors
  Kristy Choi, Chris Cundy, Sanjari Srivastava, Stefano Ermon. BDL. 22 Oct 2022. [112, 43, 0]
Exploring The Landscape of Distributional Robustness for Question Answering Models
  Anas Awadalla, Mitchell Wortsman, Gabriel Ilharco, Sewon Min, Ian H. Magnusson, Hannaneh Hajishirzi, Ludwig Schmidt. ELMOODKELM. 22 Oct 2022. [116, 21, 0]
Generative Prompt Tuning for Relation Classification
  Jiale Han, Shuai Zhao, Bo Cheng, Shengkun Ma, Wei Lu. VLM. 22 Oct 2022. [106, 27, 0]
Prompt-Tuning Can Be Much Better Than Fine-Tuning on Cross-lingual Understanding With Multilingual Language Models
  Lifu Tu, Caiming Xiong, Yingbo Zhou. VLMAAMLLRM. 22 Oct 2022. [143, 28, 0]
Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards
  Yekun Chai, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. VLM. 21 Oct 2022. [89, 17, 0]
Efficiently Tuned Parameters are Task Embeddings
  Wangchunshu Zhou, Canwen Xu, Julian McAuley. 21 Oct 2022. [58, 8, 0]
Surgical Fine-Tuning Improves Adaptation to Distribution Shifts
  Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn. OOD. 20 Oct 2022. [137, 214, 0]
Scaling Instruction-Finetuned Language Models
  Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, ..., Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason W. Wei. ReLMLRM. 20 Oct 2022. [311, 3,178, 0]
Transcending Scaling Laws with 0.1% Extra Compute
  Yi Tay, Jason W. Wei, Hyung Won Chung, Vinh Q. Tran, David R. So, ..., Donald Metzler, Slav Petrov, N. Houlsby, Quoc V. Le, Mostafa Dehghani. LRM. 20 Oct 2022. [109, 71, 0]
Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts
  Xiangyang Liu, Tianxiang Sun, Xuanjing Huang, Xipeng Qiu. VLM. 20 Oct 2022. [103, 29, 0]