ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

The Pile: An 800GB Dataset of Diverse Text for Language Modeling (arXiv:2101.00027)

31 December 2020
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy. [AIMat]

Papers citing "The Pile: An 800GB Dataset of Diverse Text for Language Modeling"

50 / 399 papers shown
• ByGPT5: End-to-End Style-conditioned Poetry Generation with Token-free Language Models
  Jonas Belouadi, Steffen Eger. 20 Dec 2022.
• NusaCrowd: Open Source Initiative for Indonesian NLP Resources
  Samuel Cahyawijaya, Holy Lovenia, Alham Fikri Aji, Genta Indra Winata, Bryan Wilie, ..., Timothy Baldwin, Sebastian Ruder, Herry Sujaini, S. Sakti, Ayu Purwarianti. 19 Dec 2022.
• Natural Language to Code Generation in Interactive Data Science Notebooks
  Pengcheng Yin, Wen-Ding Li, Kefan Xiao, Abhishek Rao, Yeming Wen, ..., Paige Bailey, Michele Catasta, Henryk Michalewski, Oleksandr Polozov, Charles Sutton. 19 Dec 2022.
• Synthesis and Evaluation of a Domain-specific Large Data Set for Dungeons & Dragons
  Akila Peiris, Nisansa de Silva. 18 Dec 2022.
• Prompting Is Programming: A Query Language for Large Language Models
  Luca Beurer-Kellner, Marc Fischer, Martin Vechev. 12 Dec 2022. [LRM]
• A Survey on Natural Language Processing for Programming
  Qingfu Zhu, Xianzhen Luo, Fang Liu, Cuiyun Gao, Wanxiang Che. 12 Dec 2022.
• DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing
  Conglong Li, Z. Yao, Xiaoxia Wu, Minjia Zhang, Connor Holmes, Cheng Li, Yuxiong He. 07 Dec 2022.
• MegaBlocks: Efficient Sparse Training with Mixture-of-Experts
  Trevor Gale, Deepak Narayanan, C. Young, Matei A. Zaharia. 29 Nov 2022. [MoE]
• Understanding BLOOM: An empirical study on diverse NLP tasks
  Parag Dakle, Sai Krishna Rallabandi, Preethi Raghavan. 27 Nov 2022. [AI4CE]
• Contextual Expressive Text-to-Speech
  Jianhong Tu, Zeyu Cui, Xiaohuan Zhou, Siqi Zheng, Kaiqin Hu, Ju Fan, Chang Zhou. 26 Nov 2022.
• Validating Large Language Models with ReLM
  Michael Kuchnik, Virginia Smith, George Amvrosiadis. 21 Nov 2022.
• SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
  Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, Song Han. 18 Nov 2022. [MQ]
• Random-LTD: Random and Layerwise Token Dropping Brings Efficient Training for Large-scale Transformers
  Z. Yao, Xiaoxia Wu, Conglong Li, Connor Holmes, Minjia Zhang, Cheng-rong Li, Yuxiong He. 17 Nov 2022.
• Execution-based Evaluation for Data Science Code Generation Models
  Junjie Huang, Chenglong Wang, Jipeng Zhang, Cong Yan, Haotian Cui, J. Inala, Colin B. Clement, Nan Duan, Jianfeng Gao. 17 Nov 2022. [ELM]
• Large Language Models Struggle to Learn Long-Tail Knowledge
  Nikhil Kandpal, H. Deng, Adam Roberts, Eric Wallace, Colin Raffel. 15 Nov 2022. [RALM, KELM]
• Evaluating How Fine-tuning on Bimodal Data Effects Code Generation
  Gabriel Orlanski, Seonhye Yang, Michael Healy. 15 Nov 2022. [ALM]
• Logical Tasks for Measuring Extrapolation and Rule Comprehension
  Ippei Fujisawa, Ryota Kanai. 14 Nov 2022. [ELM, LRM]
• The CRINGE Loss: Learning what language not to model
  Leonard Adolphs, Tianyu Gao, Jing Xu, Kurt Shuster, Sainbayar Sukhbaatar, Jason Weston. 10 Nov 2022. [MU]
• BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
  BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, ..., Zhongli Xie, Zifan Ye, M. Bras, Younes Belkada, Thomas Wolf. 09 Nov 2022. [VLM]
• Astronomia ex machina: a history, primer, and outlook on neural networks in astronomy
  Michael J. Smith, James E. Geach. 07 Nov 2022.
• Truncation Sampling as Language Model Desmoothing
  John Hewitt, Christopher D. Manning, Percy Liang. 27 Oct 2022. [BDL]
• Will we run out of data? Limits of LLM scaling based on human-generated data
  Pablo Villalobos, A. Ho, J. Sevilla, T. Besiroglu, Lennart Heim, Marius Hobbhahn. 26 Oct 2022. [ALM]
• Weakly Supervised Data Augmentation Through Prompting for Dialogue Understanding
  Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Andrew Rosenbaum, Seokhwan Kim, Yang Liu, Zhou Yu, Dilek Z. Hakkani-Tür. 25 Oct 2022.
• Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs
  Maarten Sap, Ronan Le Bras, Daniel Fried, Yejin Choi. 24 Oct 2022.
• lo-fi: distributed fine-tuning without communication
  Mitchell Wortsman, Suchin Gururangan, Shen Li, Ali Farhadi, Ludwig Schmidt, Michael G. Rabbat, Ari S. Morcos. 19 Oct 2022.
• Attribution and Obfuscation of Neural Text Authorship: A Data Mining Perspective
  Adaku Uchendu, Thai Le, Dongwon Lee. 19 Oct 2022. [DeLMO]
• Model Criticism for Long-Form Text Generation
  Yuntian Deng, Volodymyr Kuleshov, Alexander M. Rush. 16 Oct 2022.
• Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods
  Evan Crothers, Nathalie Japkowicz, H. Viktor. 13 Oct 2022. [DeLMO]
• EleutherAI: Going Beyond "Open Science" to "Science in the Open"
  Jason Phang, Herbie Bradley, Leo Gao, Louis Castricato, Stella Biderman. 12 Oct 2022. [VLM]
• Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of In-Context Experts
  Nghia T. Le, Fan Bai, Alan Ritter. 07 Oct 2022.
• Ask Me Anything: A simple strategy for prompting language models
  Simran Arora, A. Narayan, Mayee F. Chen, Laurel J. Orr, Neel Guha, Kush S. Bhatia, Ines Chami, Frederic Sala, Christopher Ré. 05 Oct 2022. [ReLM, LRM]
• GLM-130B: An Open Bilingual Pre-trained Model
  Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng-Zhen Zhang, Yuxiao Dong, Jie Tang. 05 Oct 2022. [BDL, LRM]
• Adapting Pretrained Text-to-Text Models for Long Text Sequences
  Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Wen-tau Yih. 21 Sep 2022. [RALM, VLM]
• On the Relation between Sensitivity and Accuracy in In-context Learning
  Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He. 16 Sep 2022.
• FP8 Formats for Deep Learning
  Paulius Micikevicius, Dusan Stosic, N. Burgess, Marius Cornea, Pradeep Dubey, ..., Naveen Mellempudi, S. Oberman, M. Shoeybi, Michael Siu, Hao Wu. 12 Sep 2022. [BDL, VLM, MQ]
• MultiPL-E: A Scalable and Extensible Approach to Benchmarking Neural Code Generation
  Federico Cassano, John Gouwar, Daniel Nguyen, S. Nguyen, Luna Phipps-Costin, ..., Carolyn Jane Anderson, Molly Q. Feldman, Arjun Guha, Michael Greenberg, Abhinav Jangda. 17 Aug 2022. [ELM]
• PanGu-Coder: Program Synthesis with Function-Level Language Modeling
  Fenia Christopoulou, Gerasimos Lampouras, Milan Gritta, Guchun Zhang, Yinpeng Guo, ..., Guangtai Liang, Jia Wei, Xin Jiang, Qianxiang Wang, Qun Liu. 22 Jul 2022. [ELM, SyDa, ALM]
• Contrastive Adapters for Foundation Model Group Robustness
  Michael Zhang, Christopher Ré. 14 Jul 2022. [VLM]
• Language Modelling with Pixels
  Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, Desmond Elliott. 14 Jul 2022. [VLM]
• BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling
  Javier de la Rosa, E. G. Ponferrada, Paulo Villegas, Pablo González de Prado Salas, Manu Romero, María Grandury. 14 Jul 2022.
• The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications
  Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, S. Kominers, Stuart M. Shieber. 08 Jul 2022.
• Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset
  Peter Henderson, M. Krass, Lucia Zheng, Neel Guha, Christopher D. Manning, Dan Jurafsky, Daniel E. Ho. 01 Jul 2022. [AILaw, ELM]
• esCorpius: A Massive Spanish Crawling Corpus
  Asier Gutiérrez-Fandiño, David Pérez-Fernández, Jordi Armengol-Estapé, D. Griol, Z. Callejas. 30 Jun 2022.
• GODEL: Large-Scale Pre-Training for Goal-Directed Dialog
  Baolin Peng, Michel Galley, Pengcheng He, Chris Brockett, Lars Liden, E. Nouri, Zhou Yu, Bill Dolan, Jianfeng Gao. 22 Jun 2022. [VLM]
• MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge
  Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, Anima Anandkumar. 17 Jun 2022. [LM&Ro]
• CERT: Continual Pre-Training on Sketches for Library-Oriented Code Generation
  Daoguang Zan, Bei Chen, Dejian Yang, Zeqi Lin, Minsu Kim, Bei Guan, Yongji Wang, Weizhu Chen, Jian-Guang Lou. 14 Jun 2022.
• Language Models are General-Purpose Interfaces
  Y. Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shuming Ma, Furu Wei. 13 Jun 2022. [MLLM]
• ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
  Z. Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He. 04 Jun 2022. [VLM, MQ]
• Can Foundation Models Help Us Achieve Perfect Secrecy?
  Simran Arora, Christopher Ré. 27 May 2022. [FedML]
• Are Large Pre-Trained Language Models Leaking Your Personal Information?
  Jie Huang, Hanyin Shao, Kevin Chen-Chuan Chang. 25 May 2022. [PILM]