A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks
Nikunj Saunshi, Sadhika Malladi, Sanjeev Arora
7 October 2020 · arXiv:2010.03648

Papers citing "A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks"
16 of 66 citing papers shown

Learning To Retrieve Prompts for In-Context Learning
Ohad Rubin, Jonathan Herzig, Jonathan Berant · 16 Dec 2021 · Tags: VPVLM, RALM

Self-Supervised Representation Learning: Introduction, Advances and Challenges
Linus Ericsson, Henry Gouk, Chen Change Loy, Timothy M. Hospedales · 18 Oct 2021 · Tags: SSL, OOD, AI4TS

On a Benefit of Mask Language Modeling: Robustness to Simplicity Bias
Ting-Rui Chiang · 11 Oct 2021

On the Surrogate Gap between Contrastive and Supervised Losses
Han Bao, Yoshihiro Nagano, Kento Nozawa · 06 Oct 2021 · Tags: SSL, UQCV

Comparing Text Representations: A Theory-Driven Approach
Gregory Yauney, David M. Mimno · 15 Sep 2021

Cross-lingual Transfer for Text Classification with Dictionary-based Heterogeneous Graph
Nuttapong Chairatanakul, Noppayut Sriwatanasakdi, Nontawat Charoenphakdee, Xin Liu, T. Murata · 09 Sep 2021

On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets
Cheng-Han Chiang, Hung-yi Lee · 08 Sep 2021 · Tags: SyDa

Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen · 30 Aug 2021 · Tags: VLM

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig · 28 Jul 2021 · Tags: VLM, SyDa

Pretext Tasks selection for multitask self-supervised speech representation learning
Salah Zaiem, Titouan Parcollet, S. Essid, Abdel Heba · 01 Jul 2021 · Tags: SSL

Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Colin Wei, Sang Michael Xie, Tengyu Ma · 17 Jun 2021

On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies
Tianyi Zhang, Tatsunori Hashimoto · 12 Apr 2021 · Tags: AI4CE

Structure Inducing Pre-Training
Matthew B. A. McDermott, Brendan Yap, Peter Szolovits, Marinka Zitnik · 18 Mar 2021

Can Pretext-Based Self-Supervised Learning Be Boosted by Downstream Data? A Theoretical Analysis
Jiaye Teng, Weiran Huang, Haowei He · 05 Mar 2021 · Tags: SSL

The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gallé, Z. Assylbekov · 02 Mar 2021

Predicting What You Already Know Helps: Provable Self-Supervised Learning
Jason D. Lee, Qi Lei, Nikunj Saunshi, Jiacheng Zhuo · 03 Aug 2020 · Tags: SSL