RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
25 May 2022
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P. Xing, Zhiting Hu
Papers citing "RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning" (21 of 71 papers shown)
Parameter-Efficient Fine-Tuning Design Spaces
Jiaao Chen, Aston Zhang, Xingjian Shi, Mu Li, Alexander J. Smola, Diyi Yang
04 Jan 2023
Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering
Zhiyong Wu, Yaoxiang Wang, Jiacheng Ye, Lingpeng Kong
20 Dec 2022
Decoder Tuning: Efficient Language Understanding as Decoding
Ganqu Cui, Wentao Li, Ning Ding, Longtao Huang, Zhiyuan Liu, Maosong Sun
16 Dec 2022
Demystifying Prompts in Language Models via Perplexity Estimation
Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, Luke Zettlemoyer
Tags: LRM
08 Dec 2022
TEMPERA: Test-Time Prompting via Reinforcement Learning
Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, Joseph E. Gonzalez
Tags: VLM
21 Nov 2022
Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering
Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi
Tags: RALM
06 Oct 2022
Discovering the Hidden Vocabulary of DALLE-2
Giannis Daras, A. Dimakis
01 Jun 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
Tags: LM&Ro, LRM, AI4CE, ReLM
28 Jan 2022
Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
Tags: LRM
15 Oct 2021
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer
Tags: VLM, LRM
15 Oct 2021
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Tags: VLM
14 Oct 2021
MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators
Zhixing Tan, Xiangwen Zhang, Shuo Wang, Yang Liu
Tags: VLM, LRM
13 Oct 2021
A Recipe For Arbitrary Text Style Transfer with Large Language Models
Emily Reif, Daphne Ippolito, Ann Yuan, Andy Coenen, Chris Callison-Burch, Jason W. Wei
08 Sep 2021
Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
Tags: AILaw, LRM
18 Apr 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021
What Makes Good In-Context Examples for GPT-3?
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen
Tags: AAML, RALM
17 Jan 2021
WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
Tags: AAML
01 Jan 2021
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020
Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
Tags: ALM
18 Sep 2019
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Tags: KELM, AI4MH
03 Sep 2019