Factual Probing Is [MASK]: Learning vs. Learning to Recall
Zexuan Zhong, Dan Friedman, Danqi Chen
arXiv:2104.05240 · 12 April 2021
Papers citing "Factual Probing Is [MASK]: Learning vs. Learning to Recall" (24 of 274 shown)
Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models
Zaiqiao Meng, Fangyu Liu, Ehsan Shareghi, Yixuan Su, Charlotte Collins, Nigel Collier · 15 Oct 2021

Exploring Universal Intrinsic Task Subspace via Prompt Tuning
Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, ..., Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou · VLM, VPVLM · 15 Oct 2021

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang · VLM · 14 Oct 2021

A Few More Examples May Be Worth Billions of Parameters
Yuval Kirstain, Patrick Lewis, Sebastian Riedel, Omer Levy · 08 Oct 2021

Inferring Offensiveness In Images From Natural Language Supervision
P. Schramowski, Kristian Kersting · 08 Oct 2021

Paradigm Shift in Natural Language Processing
Tianxiang Sun, Xiangyang Liu, Xipeng Qiu, Xuanjing Huang · 26 Sep 2021

Benchmarking Commonsense Knowledge Base Population with an Effective Evaluation Dataset
Tianqing Fang, Weiqi Wang, Sehyun Choi, Shibo Hao, Hongming Zhang, Yangqiu Song, Bin He · 16 Sep 2021

Can Language Models be Biomedical Knowledge Bases?
Mujeen Sung, Jinhyuk Lee, Sean S. Yi, Minji Jeon, Sungdong Kim, Jaewoo Kang · AI4MH · 15 Sep 2021

PPT: Pre-trained Prompt Tuning for Few-shot Learning
Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang · VLM · 09 Sep 2021

Discrete and Soft Prompting for Multilingual Models
Mengjie Zhao, Hinrich Schütze · LRM · 08 Sep 2021

Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu · VPVLM, CLIP, VLM · 02 Sep 2021

Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen · VLM · 30 Aug 2021

Noisy Channel Language Model Prompting for Few-Shot Text Classification
Sewon Min, Michael Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer · VLM · 09 Aug 2021

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig · VLM, SyDa · 28 Jul 2021

A Closer Look at How Fine-tuning Changes BERT
Yichu Zhou, Vivek Srikumar · 27 Jun 2021

Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
Robert L Logan IV, Ivana Balažević, Eric Wallace, Fabio Petroni, Sameer Singh, Sebastian Riedel · VPVLM · 24 Jun 2021

Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Colin Wei, Sang Michael Xie, Tengyu Ma · 17 Jun 2021

BERTnesia: Investigating the capture and forgetting of knowledge in BERT
Jonas Wallat, Jaspreet Singh, Avishek Anand · CLL, KELM · 05 Jun 2021

True Few-Shot Learning with Language Models
Ethan Perez, Douwe Kiela, Kyunghyun Cho · 24 May 2021

Relational World Knowledge Representation in Contextual Language Models: A Review
Tara Safavi, Danai Koutra · KELM · 12 Apr 2021

ASER: Towards Large-scale Commonsense Knowledge Acquisition via Higher-order Selectional Preference over Eventualities
Hongming Zhang, Xin Liu, Haojie Pan, Hao Ke, Jiefu Ou, Tianqing Fang, Yangqiu Song · 05 Apr 2021

GPT Understands, Too
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang · VLM · 18 Mar 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen · 31 Dec 2020

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang · LM&MA, VLM · 18 Mar 2020