An Adaptive Placement and Parallelism Framework for Accelerating RLHF Training

19 December 2023
Youshao Xiao, Weichang Wu, Zhenglei Zhou, Fagui Mao, Shangchun Zhao, Lin Ju, Lei Liang, Xiaolu Zhang, Jun Zhou
arXiv:2312.11819

Papers citing "An Adaptive Placement and Parallelism Framework for Accelerating RLHF Training"

9 of 9 papers shown

Cosmos-Reason1: From Physical Common Sense To Embodied Reasoning
Nvidia, A. Azzolini, Junjie Bai, Prithvijit Chattopadhyay, Huayu Chen, ..., Xiaodong Yang, Zhuolin Yang, Jingyang Zhang, Xiaohui Zeng, Zhe Zhang
18 Mar 2025

Understanding and Alleviating Memory Consumption in RLHF for LLMs
Jin Zhou, Hanmei Yang, Steven Tang, Mingcan Xiang, Hui Guan, Tongping Liu
21 Oct 2024

Efficient Training of Large Language Models on Distributed Infrastructures: A Survey
Jiangfei Duan, Shuo Zhang, Zerui Wang, Lijuan Jiang, Wenwen Qu, ..., Dahua Lin, Yonggang Wen, Xin Jin, Tianwei Zhang, Peng Sun
29 Jul 2024

AntDT: A Self-Adaptive Distributed Training Framework for Leader and Straggler Nodes
Youshao Xiao, Lin Ju, Zhenglei Zhou, Siyuan Li, Zhaoxin Huan, ..., Rujie Jiang, Lin Wang, Xiaolu Zhang, Lei Liang, Jun Zhou
15 Apr 2024

G-Meta: Distributed Meta Learning in GPU Clusters for Large-Scale Recommender Systems
Youshao Xiao, Shangchun Zhao, Zhenglei Zhou, Zhaoxin Huan, Lin Ju, Xiaolu Zhang, Lin Wang, Jun Zhou
09 Jan 2024

ZeRO++: Extremely Efficient Collective Communication for Giant Model Training
Guanhua Wang, Heyang Qin, S. A. Jacobs, Connor Holmes, Samyam Rajbhandari, Olatunji Ruwase, Feng Yan, Lei Yang, Yuxiong He
16 Jun 2023

Improving alignment of dialogue agents via targeted human judgements
Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, ..., John F. J. Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, G. Irving
28 Sep 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
17 Sep 2019