The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
18 April 2021 · arXiv:2104.08691
Papers citing "The Power of Scale for Parameter-Efficient Prompt Tuning"

50 / 2,607 papers shown

LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5
Chengwei Qin, Shafiq Joty (14 Oct 2021)

Symbolic Knowledge Distillation: from General Language Models to Commonsense Models
Peter West, Chandrasekhar Bhagavatula, Jack Hessel, Jena D. Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, Yejin Choi (14 Oct 2021)

MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators
Zhixing Tan, Xiangwen Zhang, Shuo Wang, Yang Liu (13 Oct 2021)

Prompt-tuning in ASR systems for efficient domain-adaptation
Saket Dingliwal, Ashish Shenoy, S. Bodapati, Ankur Gandhe, R. Gadde, Katrin Kirchhoff (13 Oct 2021)

Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang (13 Oct 2021)

LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners
Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao (12 Oct 2021)

Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting
Zahra Fatemi, Chen Xing, Wenhao Liu, Caiming Xiong (11 Oct 2021)

CLIP-Adapter: Better Vision-Language Models with Feature Adapters
Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, Yu Qiao (09 Oct 2021)

A Few More Examples May Be Worth Billions of Parameters
Yuval Kirstain, Patrick Lewis, Sebastian Riedel, Omer Levy (08 Oct 2021)

Towards a Unified View of Parameter-Efficient Transfer Learning
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig (08 Oct 2021)

FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding
Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Chonghua Liao, Jian Li, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, Zhilin Yang (27 Sep 2021)

Paradigm Shift in Natural Language Processing
Tianxiang Sun, Xiangyang Liu, Xipeng Qiu, Xuanjing Huang (26 Sep 2021)

Reframing Instructional Prompts to GPTk's Language
Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, Hannaneh Hajishirzi (16 Sep 2021)

Exploring Prompt-based Few-shot Learning for Grounded Dialog Generation
Chujie Zheng, Minlie Huang (14 Sep 2021)

Few-Shot Cross-Lingual Stance Detection with Sentiment-Based Pre-Training
Momchil Hardalov, Arnav Arora, Preslav Nakov, Isabelle Augenstein (13 Sep 2021)

PoKE: A Prompt-based Knowledge Eliciting Approach for Event Argument Extraction
Jiaju Lin, Qin Chen (11 Sep 2021)

What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
Boseop Kim, Hyoungseok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, ..., Jaewook Kang, Inho Kang, Jung-Woo Ha, W. Park, Nako Sung (10 Sep 2021)

CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented Dialog Systems
Fei Mi, Yitong Li, Yasheng Wang, Xin Jiang, Qun Liu (10 Sep 2021)

PPT: Pre-trained Prompt Tuning for Few-shot Learning
Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang (09 Sep 2021)

Continuous Entailment Patterns for Lexical Inference in Context
Martin Schmitt, Hinrich Schütze (08 Sep 2021)

Discrete and Soft Prompting for Multilingual Models
Mengjie Zhao, Hinrich Schütze (08 Sep 2021)

Teaching Autoregressive Language Models Complex Tasks By Demonstration
Gabriel Recchia (05 Sep 2021)

Finetuned Language Models Are Zero-Shot Learners
Jason W. Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le (03 Sep 2021)

Do Prompt-Based Models Really Understand the Meaning of their Prompts?
Albert Webson, Ellie Pavlick (02 Sep 2021)

Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu (02 Sep 2021)

A Generative Approach for Mitigating Structural Biases in Natural Language Inference
Dimion Asael, Zachary M. Ziegler, Yonatan Belinkov (31 Aug 2021)

LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting
Xiang Chen, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, Ningyu Zhang (31 Aug 2021)

Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen (30 Aug 2021)

Prompt-Learning for Fine-Grained Entity Typing
Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Haitao Zheng, Zhiyuan Liu, Juan-Zi Li, Hong-Gee Kim (24 Aug 2021)

Toward a 'Standard Model' of Machine Learning
Zhiting Hu, Eric Xing (17 Aug 2021)

Program Synthesis with Large Language Models
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, ..., Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, Charles Sutton (16 Aug 2021)

Accurate, yet inconsistent? Consistency Analysis on Language Understanding Models
Myeongjun Jang, D. Kwon, Thomas Lukasiewicz (15 Aug 2021)

The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation
Ernie Chang, Xiaoyu Shen, Alex Marin, Vera Demberg (14 Aug 2021)

AMMUS: A Survey of Transformer-based Pretrained Models in Natural Language Processing
Katikapalli Subramanyam Kalyan, A. Rajasekharan, S. Sangeetha (12 Aug 2021)

Noisy Channel Language Model Prompting for Few-Shot Text Classification
Sewon Min, Michael Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer (09 Aug 2021)

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig (28 Jul 2021)

Multimodal Few-Shot Learning with Frozen Language Models
Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill (25 Jun 2021)

Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
Robert L. Logan IV, Ivana Balažević, Eric Wallace, Fabio Petroni, Sameer Singh, Sebastian Riedel (24 Jun 2021)

CPM-2: Large-scale Cost-effective Pre-trained Language Models
Zhengyan Zhang, Yuxian Gu, Xu Han, Shengqi Chen, Chaojun Xiao, ..., Minlie Huang, Wentao Han, Yang Liu, Xiaoyan Zhu, Maosong Sun (20 Jun 2021)

LoRA: Low-Rank Adaptation of Large Language Models
J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen (17 Jun 2021)

Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Colin Wei, Sang Michael Xie, Tengyu Ma (17 Jun 2021)

Efficient (Soft) Q-Learning for Text Generation with Limited Good Data
Han Guo, Bowen Tan, Zhengzhong Liu, Eric P. Xing, Zhiting Hu (14 Jun 2021)

Pre-Trained Models: Past, Present and Future
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu (14 Jun 2021)

Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder (08 Jun 2021)

Signal Transformer: Complex-valued Attention and Meta-Learning for Signal Recognition
Yihong Dong, Ying Peng, Muqiao Yang, Songtao Lu, Qingjiang Shi (05 Jun 2021)

PTR: Prompt Tuning with Rules for Text Classification
Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, Maosong Sun (24 May 2021)

MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
Yaxing Wang, Abel Gonzalez-Garcia, Chenshen Wu, Luis Herranz, Fahad Shahbaz Khan, Shangling Jui, Joost van de Weijer (28 Apr 2021)

Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation
Mozhdeh Gheini, Xiang Ren, Jonathan May (18 Apr 2021)

KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction
Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen (15 Apr 2021)

Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections
Ruiqi Zhong, Kristy Lee, Zheng Zhang, Dan Klein (10 Apr 2021)