Rethinking Memory and Communication Cost for Efficient Large Language Model Training
arXiv:2310.06003 · 9 October 2023
Chan Wu, Hanxiao Zhang, Lin Ju, Jinjing Huang, Youshao Xiao, Zhaoxin Huan, Siyuan Li, Fanzhuang Meng, Lei Liang, Xiaolu Zhang, Jun Zhou
Papers citing "Rethinking Memory and Communication Cost for Efficient Large Language Model Training" (5 of 5 papers shown):
G-Meta: Distributed Meta Learning in GPU Clusters for Large-Scale Recommender Systems
Youshao Xiao, Shangchun Zhao, Zhenglei Zhou, Zhaoxin Huan, Lin Ju, Xiaolu Zhang, Lin Wang, Jun Zhou
Tags: OffRL · 09 Jan 2024
ZeRO++: Extremely Efficient Collective Communication for Giant Model Training
Guanhua Wang, Heyang Qin, S. A. Jacobs, Connor Holmes, Samyam Rajbhandari, Olatunji Ruwase, Feng Yan, Lei Yang, Yuxiong He
Tags: VLM · 16 Jun 2023
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM · 04 Mar 2022
ZeRO-Offload: Democratizing Billion-Scale Model Training
Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyang Yang, Minjia Zhang, Dong Li, Yuxiong He
Tags: MoE · 18 Jan 2021
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
Tags: MoE · 17 Sep 2019