A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks

Nikunj Saunshi, Sadhika Malladi, Sanjeev Arora · 7 October 2020

Papers citing "A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks"

16 / 66 papers shown
Learning To Retrieve Prompts for In-Context Learning
Ohad Rubin, Jonathan Herzig, Jonathan Berant · VPVLM, RALM · 16 Dec 2021
Self-Supervised Representation Learning: Introduction, Advances and Challenges
Linus Ericsson, Henry Gouk, Chen Change Loy, Timothy M. Hospedales · SSL, OOD, AI4TS · 18 Oct 2021
On a Benefit of Mask Language Modeling: Robustness to Simplicity Bias
Ting-Rui Chiang · 11 Oct 2021
On the Surrogate Gap between Contrastive and Supervised Losses
Han Bao, Yoshihiro Nagano, Kento Nozawa · SSL, UQCV · 06 Oct 2021
Comparing Text Representations: A Theory-Driven Approach
Gregory Yauney, David M. Mimno · 15 Sep 2021
Cross-lingual Transfer for Text Classification with Dictionary-based Heterogeneous Graph
Nuttapong Chairatanakul, Noppayut Sriwatanasakdi, Nontawat Charoenphakdee, Xin Liu, T. Murata · 09 Sep 2021
On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets
Cheng-Han Chiang, Hung-yi Lee · SyDa · 08 Sep 2021
Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen · VLM · 30 Aug 2021
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig · VLM, SyDa · 28 Jul 2021
Pretext Tasks selection for multitask self-supervised speech representation learning
Salah Zaiem, Titouan Parcollet, S. Essid, Abdel Heba · SSL · 01 Jul 2021
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Colin Wei, Sang Michael Xie, Tengyu Ma · 17 Jun 2021
On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies
Tianyi Zhang, Tatsunori Hashimoto · AI4CE · 12 Apr 2021
Structure Inducing Pre-Training
Matthew B. A. McDermott, Brendan Yap, Peter Szolovits, Marinka Zitnik · 18 Mar 2021
Can Pretext-Based Self-Supervised Learning Be Boosted by Downstream Data? A Theoretical Analysis
Jiaye Teng, Weiran Huang, Haowei He · SSL · 05 Mar 2021
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gallé, Z. Assylbekov · 02 Mar 2021
Predicting What You Already Know Helps: Provable Self-Supervised Learning
Jason D. Lee, Qi Lei, Nikunj Saunshi, Jiacheng Zhuo · SSL · 03 Aug 2020