USP: A Unified Sequence Parallelism Approach for Long Context Generative AI
Jiarui Fang, Shangchun Zhao
arXiv:2405.07719 · 13 May 2024
Papers citing "USP: A Unified Sequence Parallelism Approach for Long Context Generative AI" (12 of 12 shown)

| Title | Authors | Tags | Metrics | Date |
| --- | --- | --- | --- | --- |
| ATTENTION2D: Communication Efficient Distributed Self-Attention Mechanism | Venmugil Elango | | 50 · 0 · 0 | 20 Mar 2025 |
| MagicInfinite: Generating Infinite Talking Videos with Your Words and Voice | Hongwei Yi, Tian Ye, Shitong Shao, Xuancheng Yang, Jiantong Zhao, ..., Zeke Xie, Lei Zhu, Wei Li, Michael Lingelbach, Daquan Zhou | VGen | 52 · 1 · 0 | 07 Mar 2025 |
| InternVideo2.5: Empowering Video MLLMs with Long and Rich Context Modeling | Yi Wang, Xinhao Li, Ziang Yan, Yinan He, Jiashuo Yu, ..., Kai Chen, Wenhai Wang, Yu Qiao, Yali Wang, Limin Wang | | 89 · 19 · 0 | 21 Jan 2025 |
| TokenRing: An Efficient Parallelism Framework for Infinite-Context LLMs via Bidirectional Communication | Zongwu Wang, Fangxin Liu, Mingshuai Li, Li Jiang | LRM | 39 · 0 · 0 | 29 Dec 2024 |
| xDiT: an Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism | Jiarui Fang, Jinzhe Pan, Xibo Sun, Aoyu Li, Jiannan Wang | | 56 · 5 · 0 | 04 Nov 2024 |
| Context Parallelism for Scalable Million-Token Inference | Amy Yang, Jingyi Yang, Aya Ibrahim, Xinfeng Xie, Bangsheng Tang, Grigory Sizov, Jeremy Reizenstein, Jongsoo Park, Jianyu Huang | MoE, LRM | 64 · 5 · 0 | 04 Nov 2024 |
| Efficient Training of Large Language Models on Distributed Infrastructures: A Survey | Jiangfei Duan, Shuo Zhang, Zerui Wang, Lijuan Jiang, Wenwen Qu, ..., Dahua Lin, Yonggang Wen, Xin Jin, Tianwei Zhang, Peng Sun | | 73 · 8 · 0 | 29 Jul 2024 |
| LoongTrain: Efficient Training of Long-Sequence LLMs with Head-Context Parallelism | Diandian Gu, Peng Sun, Qinghao Hu, Ting Huang, Xun Chen, ..., Jiarui Fang, Yonggang Wen, Tianwei Zhang, Xin Jin, Xuanzhe Liu | LRM | 45 · 7 · 0 | 26 Jun 2024 |
| PipeFusion: Displaced Patch Pipeline Parallelism for Inference of Diffusion Transformer Models | Jiannan Wang, Jiarui Fang, Aoyu Li, PengCheng Yang | AI4CE | 64 · 8 · 0 | 23 May 2024 |
| Tutel: Adaptive Mixture-of-Experts at Scale | Changho Hwang, Wei Cui, Yifan Xiong, Ziyue Yang, Ze Liu, ..., Joe Chau, Peng Cheng, Fan Yang, Mao Yang, Y. Xiong | MoE | 97 · 110 · 0 | 07 Jun 2022 |
| ZeRO-Offload: Democratizing Billion-Scale Model Training | Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyang Yang, Minjia Zhang, Dong Li, Yuxiong He | MoE | 177 · 414 · 0 | 18 Jan 2021 |
| Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism | M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro | MoE | 245 · 1,821 · 0 | 17 Sep 2019 |