Prefix-Tuning: Optimizing Continuous Prompts for Generation

1 January 2021
Xiang Lisa Li
Percy Liang

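For context, prefix-tuning freezes the pretrained language model and optimizes only a small set of continuous, task-specific vectors (the "prefix") that every attention layer attends to as extra key/value positions; in the paper the prefix is additionally reparameterized through an MLP during training and prepended at every layer of a frozen GPT-2 or BART. The following is a minimal single-layer sketch in PyTorch, not the authors' implementation: the class name PrefixAttention, the prefix_len parameter, and the single-head simplification are illustrative assumptions.

import torch
import torch.nn as nn

class PrefixAttention(nn.Module):
    """Single-head self-attention whose only trainable parameters are prefix keys/values."""
    def __init__(self, d_model: int, prefix_len: int):
        super().__init__()
        # Stand-ins for the pretrained (frozen) projections.
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        for proj in (self.q, self.k, self.v):
            proj.weight.requires_grad_(False)
            proj.bias.requires_grad_(False)
        # The only trainable parameters: continuous prefix keys and values.
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b = x.size(0)
        q = self.q(x)
        # Prepend the learned prefix to the keys and values for every example.
        k = torch.cat([self.prefix_k.expand(b, -1, -1), self.k(x)], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), self.v(x)], dim=1)
        attn = torch.softmax(q @ k.transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
        return attn @ v

layer = PrefixAttention(d_model=64, prefix_len=8)
out = layer(torch.randn(2, 10, 64))  # -> shape (2, 10, 64)
print([n for n, p in layer.named_parameters() if p.requires_grad])  # ['prefix_k', 'prefix_v']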
Papers citing "Prefix-Tuning: Optimizing Continuous Prompts for Generation"

50 / 2,700 papers shown
Program Synthesis with Large Language Models
Jacob Austin
Augustus Odena
Maxwell Nye
Maarten Bosma
Henryk Michalewski
...
Ellen Jiang
Carrie J. Cai
Michael Terry
Quoc V. Le
Charles Sutton
ELM, AIMat, ReCod, ALM
224
2,024
0
16 Aug 2021
Accurate, yet inconsistent? Consistency Analysis on Language Understanding Models
Myeongjun Jang
D. Kwon
Thomas Lukasiewicz
84
13
0
15 Aug 2021
The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation
Ernie Chang
Xiaoyu Shen
Alex Marin
Vera Demberg
57
9
0
14 Aug 2021
AMMUS : A Survey of Transformer-based Pretrained Models in Natural Language Processing
Katikapalli Subramanyam Kalyan
A. Rajasekharan
S. Sangeetha
VLM, LM&MA
116
270
0
12 Aug 2021
Noisy Channel Language Model Prompting for Few-Shot Text Classification
Sewon Min
Michael Lewis
Hannaneh Hajishirzi
Luke Zettlemoyer
VLM
132
220
0
09 Aug 2021
Controlled Text Generation as Continuous Optimization with Multiple Constraints
Sachin Kumar
Eric Malmi
Aliaksei Severyn
Yulia Tsvetkov
BDL, AI4CE
115
79
0
04 Aug 2021
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu
Weizhe Yuan
Jinlan Fu
Zhengbao Jiang
Hiroaki Hayashi
Graham Neubig
VLM, SyDa
484
4,053
0
28 Jul 2021
Wordcraft: a Human-AI Collaborative Editor for Story Writing
Andy Coenen
Luke Davis
Daphne Ippolito
Emily Reif
Ann Yuan
LLMAG
126
74
0
15 Jul 2021
HTLM: Hyper-Text Pre-Training and Prompting of Language Models
Armen Aghajanyan
Dmytro Okhonko
M. Lewis
Mandar Joshi
Hu Xu
Gargi Ghosh
Luke Zettlemoyer
VLM, VPVLM, AI4TS, AI4CE
78
76
0
14 Jul 2021
On Training Instance Selection for Few-Shot Neural Text Generation
Ernie Chang
Xiaoyu Shen
Hui-Syuan Yeh
Vera Demberg
88
42
0
07 Jul 2021
Multimodal Few-Shot Learning with Frozen Language Models
Maria Tsimpoukelli
Jacob Menick
Serkan Cabi
S. M. Ali Eslami
Oriol Vinyals
Felix Hill
MLLM
279
792
0
25 Jun 2021
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
Robert L Logan IV
Ivana Balažević
Eric Wallace
Fabio Petroni
Sameer Singh
Sebastian Riedel
VPVLM
129
213
0
24 Jun 2021
Do Language Models Perform Generalizable Commonsense Inference?
Peifeng Wang
Filip Ilievski
Muhao Chen
Xiang Ren
ReLM, LRM
55
19
0
22 Jun 2021
BARTScore: Evaluating Generated Text as Text Generation
Weizhe Yuan
Graham Neubig
Pengfei Liu
227
851
0
22 Jun 2021
CPM-2: Large-scale Cost-effective Pre-trained Language Models
Zhengyan Zhang
Yuxian Gu
Xu Han
Shengqi Chen
Chaojun Xiao
...
Minlie Huang
Wentao Han
Yang Liu
Xiaoyan Zhu
Maosong Sun
MoE
116
88
0
20 Jun 2021
LoRA: Low-Rank Adaptation of Large Language Models
J. E. Hu
Yelong Shen
Phillip Wallis
Zeyuan Allen-Zhu
Yuanzhi Li
Shean Wang
Lu Wang
Weizhu Chen
OffRL, AI4TS, AI4CE, ALM, AIMat
937
10,661
0
17 Jun 2021
Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases
Boxi Cao
Hongyu Lin
Xianpei Han
Le Sun
Lingyong Yan
M. Liao
Tong Xue
Jin Xu
87
136
0
17 Jun 2021
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Colin Wei
Sang Michael Xie
Tengyu Ma
160
100
0
17 Jun 2021
Algorithm to Compilation Co-design: An Integrated View of Neural Network Sparsity
Fu-Ming Guo
Austin Huang
34
1
0
16 Jun 2021
Efficient (Soft) Q-Learning for Text Generation with Limited Good Data
Han Guo
Bowen Tan
Zhengzhong Liu
Eric P. Xing
Zhiting Hu
OffRL
98
35
0
14 Jun 2021
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Rabeeh Karimi Mahabadi
James Henderson
Sebastian Ruder
MoE
169
495
0
08 Jun 2021
Signal Transformer: Complex-valued Attention and Meta-Learning for Signal Recognition
Yihong Dong
Ying Peng
Muqiao Yang
Songtao Lu
Qingjiang Shi
109
9
0
05 Jun 2021
PTR: Prompt Tuning with Rules for Text Classification
Xu Han
Weilin Zhao
Ning Ding
Zhiyuan Liu
Maosong Sun
VLM
117
533
0
24 May 2021
MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
Yaxing Wang
Abel Gonzalez-Garcia
Chenshen Wu
Luis Herranz
Fahad Shahbaz Khan
Shangling Jui
Joost van de Weijer
77
6
0
28 Apr 2021
FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks
Bill Yuchen Lin
Chaoyang He
ZiHang Zeng
Hulin Wang
Yufen Huang
Christophe Dupuy
Rahul Gupta
Mahdi Soltanolkotabi
Xiang Ren
Salman Avestimehr
FedML
88
116
0
18 Apr 2021
Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation
Mozhdeh Gheini
Xiang Ren
Jonathan May
LRM
99
116
0
18 Apr 2021
Constrained Language Models Yield Few-Shot Semantic Parsers
Richard Shin
C. H. Lin
Sam Thomson
Charles C. Chen
Subhro Roy
Emmanouil Antonios Platanios
Adam Pauls
Dan Klein
J. Eisner
Benjamin Van Durme
408
206
0
18 Apr 2021
Go Forth and Prosper: Language Modeling with Ancient Textual History
Rik Koncel-Kedziorski
Noah A. Smith
KELM
40
0
0
18 Apr 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester
Rami Al-Rfou
Noah Constant
VPVLM
956
4,129
0
18 Apr 2021
Transductive Learning for Abstractive News Summarization
Arthur Bražinskas
Mengwen Liu
Ramesh Nallapati
Sujith Ravi
Markus Dreyer
AI4TS
41
1
0
17 Apr 2021
KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction
Xiang Chen
Ningyu Zhang
Xin Xie
Shumin Deng
Yunzhi Yao
Chuanqi Tan
Fei Huang
Luo Si
Huajun Chen
193
420
0
15 Apr 2021
Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
Guanghui Qin
J. Eisner
85
551
0
14 Apr 2021
Factual Probing Is [MASK]: Learning vs. Learning to Recall
Zexuan Zhong
Dan Friedman
Danqi Chen
121
413
0
12 Apr 2021
Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections
Ruiqi Zhong
Kristy Lee
Zheng Zhang
Dan Klein
196
173
0
10 Apr 2021
Plug-and-Blend: A Framework for Controllable Story Generation with Blended Control Codes
Zhiyu Lin
Mark O. Riedl
97
31
0
23 Mar 2021
Attribute Alignment: Controlling Text Generation from Pre-trained Language Models
Dian Yu
Zhou Yu
Kenji Sagae
90
40
0
20 Mar 2021
GPT Understands, Too
Xiao Liu
Yanan Zheng
Zhengxiao Du
Ming Ding
Yujie Qian
Zhilin Yang
Jie Tang
VLM
230
1,188
0
18 Mar 2021
Structural Adapters in Pretrained Language Models for AMR-to-text Generation
Leonardo F. R. Ribeiro
Yue Zhang
Iryna Gurevych
100
72
0
16 Mar 2021
Towards Socially Intelligent Agents with Mental State Transition and Human Utility
Liang Qiu
Yizhou Zhao
Yuan Liang
Pan Lu
Weiyan Shi
Zhou Yu
Song-Chun Zhu
LLMAG
101
15
0
12 Mar 2021
PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains
Eyal Ben-David
Nadav Oved
Roi Reichart
VLM, OOD
120
94
0
24 Feb 2021
Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
Laria Reynolds
Kyle McDonell
139
932
0
15 Feb 2021
Unifying Vision-and-Language Tasks via Text Generation
Jaemin Cho
Jie Lei
Hao Tan
Joey Tianyi Zhou
MLLM
424
547
0
04 Feb 2021
Dimensions of Commonsense Knowledge
Filip Ilievski
A. Oltramari
Kaixin Ma
Bin Zhang
D. McGuinness
Pedro A. Szekely
114
66
0
12 Jan 2021
WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan
Hrant Khachatrian
Jonathan May
AAML
364
354
0
01 Jan 2021
Parameter-Efficient Transfer Learning with Diff Pruning
Demi Guo
Alexander M. Rush
Yoon Kim
115
407
0
14 Dec 2020
CTRLsum: Towards Generic Controllable Text Summarization
Junxian He
Wojciech Kryściński
Bryan McCann
Nazneen Rajani
Caiming Xiong
293
142
0
08 Dec 2020
Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models
Peter West
Ximing Lu
Ari Holtzman
Chandra Bhagavatula
Jena D. Hwang
Yejin Choi
OffRL
72
13
0
16 Oct 2020
Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization
Sang Michael Xie
Tengyu Ma
Percy Liang
143
15
0
29 Jun 2020
AdapterFusion: Non-Destructive Task Composition for Transfer Learning
Jonas Pfeiffer
Aishwarya Kamath
Andreas Rücklé
Kyunghyun Cho
Iryna Gurevych
CLL, MoMe
355
862
0
01 May 2020
Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu
Tianxiang Sun
Yige Xu
Yunfan Shao
Ning Dai
Xuanjing Huang
LM&MA, VLM
476
1,500
0
18 Mar 2020