ResearchTrend.AI

It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners
arXiv:2009.07118 (v2, latest)
15 September 2020
Timo Schick, Hinrich Schütze
ArXiv (abs) · PDF · HTML

Papers citing "It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners"

13 / 613 papers shown
Analyzing Commonsense Emergence in Few-shot Knowledge Models
Jeff Da, Ronan Le Bras, Ximing Lu, Yejin Choi, Antoine Bosselut
AI4MH, KELM
175 · 41 · 0 · 01 Jan 2021
WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
AAML
339 · 354 · 0 · 01 Jan 2021
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
423 · 1,984 · 0 · 31 Dec 2020
A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters
Mengjie Zhao, Yi Zhu, Ehsan Shareghi, Ivan Vulić, Roi Reichart, Anna Korhonen, Hinrich Schütze
109 · 64 · 0 · 31 Dec 2020
Few-Shot Text Generation with Pattern-Exploiting Training
Timo Schick, Hinrich Schütze
106 · 148 · 0 · 22 Dec 2020
Parameter-Efficient Transfer Learning with Diff Pruning
Demi Guo, Alexander M. Rush, Yoon Kim
92 · 406 · 0 · 14 Dec 2020
UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability Prediction with Multi-task Learning on Self-Supervised Annotations
Gabriele Sarti
19 · 5 · 0 · 10 Nov 2020
A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios
Michael A. Hedderich, Lukas Lange, Heike Adel, Jannik Strötgen, Dietrich Klakow
347 · 301 · 0 · 23 Oct 2020
A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks
Nikunj Saunshi, Sadhika Malladi, Sanjeev Arora
87 · 89 · 0 · 07 Oct 2020
Data-Efficient Pretraining via Contrastive Self-Supervision
Nils Rethmeier, Isabelle Augenstein
100 · 21 · 0 · 02 Oct 2020
Which *BERT? A Survey Organizing Contextualized Encoders
Patrick Xia, Shijie Wu, Benjamin Van Durme
62 · 50 · 0 · 02 Oct 2020
Critical Thinking for Language Models
Gregor Betz, Christian Voigt, Kyle Richardson
SyDa, ReLM, LRM, AI4CE
111 · 35 · 0 · 15 Sep 2020
Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
LM&MA, VLM
388 · 1,498 · 0 · 18 Mar 2020