Exploring Lottery Prompts for Pre-trained Language Models
arXiv 2305.19500 · 31 May 2023
Yulin Chen, Ning Ding, Xiaobin Wang, Shengding Hu, Haitao Zheng, Zhiyuan Liu, Pengjun Xie
Tags: VLM, LRM

Papers citing "Exploring Lottery Prompts for Pre-trained Language Models" (8 of 8 papers shown)

Situated Natural Language Explanations (27 Aug 2023)
Zining Zhu, Hao Jiang, Jingfeng Yang, Sreyashi Nag, Chao Zhang, Jie Huang, Yifan Gao, Frank Rudzicz, Bing Yin
Tags: LRM · Citations: 1

Multitask Prompted Training Enables Zero-Shot Task Generalization (15 Oct 2021)
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
Tags: LRM · Citations: 1,657

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks (14 Oct 2021)
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Tags: VLM · Citations: 806

The Power of Scale for Parameter-Efficient Prompt Tuning (18 Apr 2021)
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM · Citations: 3,848

Making Pre-trained Language Models Better Few-shot Learners (31 Dec 2020)
Tianyu Gao, Adam Fisch, Danqi Chen
Citations: 1,919

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference (21 Jan 2020)
Timo Schick, Hinrich Schütze
Citations: 1,588

Language Models as Knowledge Bases? (3 Sep 2019)
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Tags: KELM, AI4MH · Citations: 2,586

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (20 Apr 2018)
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Tags: ELM · Citations: 6,959