ResearchTrend.AI
When Do You Need Billions of Words of Pretraining Data?

10 November 2020
Yian Zhang, Alex Warstadt, Haau-Sing Li, Samuel R. Bowman

Papers citing "When Do You Need Billions of Words of Pretraining Data?" (36 papers shown)
  1. Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
     Alex Warstadt, Aaron Mueller, Leshem Choshen, E. Wilcox, Chengxu Zhuang, ..., Rafael Mosquera, Bhargavi Paranjape, Adina Williams, Tal Linzen, Ryan Cotterell (10 Apr 2025)
  2. Shades of Zero: Distinguishing Impossibility from Inconceivability
     Jennifer Hu, Felix Sosa, T. Ullman (27 Feb 2025)
  3. BERTtime Stories: Investigating the Role of Synthetic Story Data in Language Pre-training
     Nikitas Theodoropoulos, Giorgos Filandrianos, Vassilis Lyberatos, Maria Lymperaiou, Giorgos Stamou [SyDa] (24 Feb 2025)
  4. Acquiring Linguistic Knowledge from Multimodal Input
     Theodor Amariucai, Alexander Scott Warstadt [CLL] (27 Feb 2024)
  5. Visual Grounding Helps Learn Word Meanings in Low-Data Regimes
     Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas (20 Oct 2023)
  6. LLM4TS: Aligning Pre-Trained LLMs as Data-Efficient Time-Series Forecasters
     Ching Chang, Wei-Yao Wang, Wenjie Peng, Tien-Fu Chen [AI4TS] (16 Aug 2023)
  7. Testing the Predictions of Surprisal Theory in 11 Languages
     Ethan Gotlieb Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, R. Levy [LRM] (07 Jul 2023)
  8. Language-Agnostic Bias Detection in Language Models with Bias Probing
     Abdullatif Köksal, Omer F. Yalcin, Ahmet Akbiyik, M. Kilavuz, Anna Korhonen, Hinrich Schütze (22 May 2023)
  9. A Better Way to Do Masked Language Model Scoring
     Carina Kauf, Anna A. Ivanova (17 May 2023)
  10. DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains
      Yanis Labrak, Adrien Bazoge, Richard Dufour, Mickael Rouvier, Emmanuel Morin, B. Daille, P. Gourraud [LM&MA] (03 Apr 2023)
  11. Revealing Weaknesses of Vietnamese Language Models Through Unanswerable Questions in Machine Reading Comprehension
      Son Quoc Tran, Phong Nguyen-Thuan Do, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen (16 Mar 2023)
  12. An Overview on Language Models: Recent Developments and Outlook
      Chengwei Wei, Yun Cheng Wang, Bin Wang, C.-C. Jay Kuo (10 Mar 2023)
  13. Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
      Shiwei Liu, Tianlong Chen, Zhenyu (Allen) Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang (03 Mar 2023)
  14. RESDSQL: Decoupling Schema Linking and Skeleton Parsing for Text-to-SQL
      Haoyang Li, Jing Zhang, Cuiping Li, Hong Chen (12 Feb 2023)
  15. Can We Use Probing to Better Understand Fine-tuning and Knowledge Distillation of the BERT NLU?
      Jakub Hościłowicz, Marcin Sowanski, Piotr Czubowski, Artur Janicki (27 Jan 2023)
  16. Dissociating language and thought in large language models
      Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko [ELM, ReLM] (16 Jan 2023)
  17. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
      BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, ..., Zhongli Xie, Zifan Ye, M. Bras, Younes Belkada, Thomas Wolf [VLM] (09 Nov 2022)
  18. SocioProbe: What, When, and Where Language Models Learn about Sociodemographics
      Anne Lauscher, Federico Bianchi, Samuel R. Bowman, Dirk Hovy (08 Nov 2022)
  19. RuCoLA: Russian Corpus of Linguistic Acceptability
      Vladislav Mikhailov, T. Shamardina, Max Ryabinin, A. Pestova, I. Smurov, Ekaterina Artemova (23 Oct 2022)
  20. On the Impossible Safety of Large AI Models
      El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan (30 Sep 2022)
  21. MonoByte: A Pool of Monolingual Byte-level Language Models
      Hugo Queiroz Abonizio, Leandro Rodrigues de Souza, R. Lotufo, Rodrigo Nogueira (22 Sep 2022)
  22. Emergent Abilities of Large Language Models
      Jason W. Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, ..., Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, J. Dean, W. Fedus [ELM, ReLM, LRM] (15 Jun 2022)
  23. ORCA: Interpreting Prompted Language Models via Locating Supporting Data Evidence in the Ocean of Pretraining Data
      Xiaochuang Han, Yulia Tsvetkov (25 May 2022)
  24. minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models
      Kanishka Misra (24 Mar 2022)
  25. Neural reality of argument structure constructions
      Bai Li, Zining Zhu, Guillaume Thomas, Frank Rudzicz, Yang Xu (24 Feb 2022)
  26. An Adaptive Graph Pre-training Framework for Localized Collaborative Filtering
      Yiqi Wang, Chaozhuo Li, Zheng Liu, Mingzheng Li, Jiliang Tang, Xing Xie, Lei Chen, Philip S. Yu (14 Dec 2021)
  27. How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN
      R. Thomas McCoy, P. Smolensky, Tal Linzen, Jianfeng Gao, Asli Celikyilmaz [SyDa] (18 Nov 2021)
  28. Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair
      Jason Phang, Angelica Chen, William Huang, Samuel R. Bowman [AAML] (16 Nov 2021)
  29. Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
      Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, Dan Roth [LM&MA, VLM, AI4CE] (01 Nov 2021)
  30. Cross-lingual Transfer of Monolingual Models
      Evangelia Gogoulou, Ariel Ekgren, T. Isbister, Magnus Sahlgren (15 Sep 2021)
  31. Probing Across Time: What Does RoBERTa Know and When?
      Leo Z. Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, Noah A. Smith [KELM] (16 Apr 2021)
  32. The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models
      Go Inoue, Bashar Alhafni, Nurpeiis Baimukan, Houda Bouamor, Nizar Habash (11 Mar 2021)
  33. The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
      Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gallé, Z. Assylbekov (02 Mar 2021)
  34. Probing Classifiers: Promises, Shortcomings, and Advances
      Yonatan Belinkov (24 Feb 2021)
  35. Language Models as Knowledge Bases?
      Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel [KELM, AI4MH] (03 Sep 2019)
  36. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
      Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman [ELM] (20 Apr 2018)