What Makes Pre-trained Language Models Better Zero-shot Learners?
arXiv: 2209.15206 · 30 September 2022
Jinghui Lu, Dongsheng Zhu, Weidong Han, Rui Zhao, Brian Mac Namee, Fei Tan
Papers citing "What Makes Pre-trained Language Models Better Zero-shot Learners?" (7 of 7 papers shown)
| Title | Authors | Tags | Likes | Citations | Comments | Date |
|---|---|---|---|---|---|---|
| Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of Language Models | Kang He, Yinghan Long, Kaushik Roy | — | 28 | 2 | 0 | 15 Feb 2024 |
| SDA: Simple Discrete Augmentation for Contrastive Sentence Representation Learning | Dongsheng Zhu, Zhenyu Mao, Jinghui Lu, Rui Zhao, Fei Tan | — | 28 | 1 | 0 | 08 Oct 2022 |
| The Power of Scale for Parameter-Efficient Prompt Tuning | Brian Lester, Rami Al-Rfou, Noah Constant | VPVLM | 280 | 3,848 | 0 | 18 Apr 2021 |
| What Makes Good In-Context Examples for GPT-3? | Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen | AAML, RALM | 275 | 1,312 | 0 | 17 Jan 2021 |
| Making Pre-trained Language Models Better Few-shot Learners | Tianyu Gao, Adam Fisch, Danqi Chen | — | 241 | 1,919 | 0 | 31 Dec 2020 |
| Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference | Timo Schick, Hinrich Schütze | — | 258 | 1,588 | 0 | 21 Jan 2020 |
| GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding | Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman | ELM | 297 | 6,956 | 0 | 20 Apr 2018 |