Decentralized Training of Foundation Models in Heterogeneous Environments
arXiv:2206.01288, 2 June 2022
Binhang Yuan, Yongjun He, Jared Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Ré, Ce Zhang

Papers citing "Decentralized Training of Foundation Models in Heterogeneous Environments" (18 papers):

Argus: Federated Non-convex Bilevel Learning over 6G Space-Air-Ground Integrated Network
Ya Liu, Kai Yang, Yu Zhu, Keying Yang, Haibo Zhao
14 May 2025

HSplitLoRA: A Heterogeneous Split Parameter-Efficient Fine-Tuning Framework for Large Language Models
Zheng Lin, Yuxin Zhang, Zhe Chen, Zihan Fang, Xianhao Chen, Praneeth Vepakomma, Wei Ni, Jun Luo, Yue Gao
MoE
05 May 2025

Prompt Inversion Attack against Collaborative Inference of Large Language Models
Wenjie Qu, Yuguang Zhou, Yongji Wu, Tingsong Xiao, Binhang Yuan, Heng Chang, Jiaheng Zhang
12 Mar 2025

Fully First-Order Methods for Decentralized Bilevel Optimization
Xiaoyu Wang, Xuxing Chen, Shiqian Ma, Tong Zhang
25 Oct 2024

Communication-Efficient Large-Scale Distributed Deep Learning: A Comprehensive Survey
Feng Liang, Zhen Zhang, Haifeng Lu, Victor C. M. Leung, Yanyi Guo, Xiping Hu
GNN
09 Apr 2024

Towards Urban General Intelligence: A Review and Outlook of Urban Foundation Models
Weijiao Zhang, Jindong Han, Zhao Xu, Hang Ni, Hao Liu, Hui Xiong
AI4CE
30 Jan 2024

EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism
Yanxi Chen, Xuchen Pan, Yaliang Li, Bolin Ding, Jingren Zhou
LRM
08 Dec 2023

Train 'n Trade: Foundations of Parameter Markets
Tzu-Heng Huang, Harit Vishwakarma, Frederic Sala
AIFin
07 Dec 2023

UniAP: Unifying Inter- and Intra-Layer Automatic Parallelism by Mixed Integer Quadratic Programming
Hao Lin, Ke Wu, Jie Li, Jun Yu Li, Wu-Jun Li
31 Jul 2023

When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
Weiming Zhuang, Chen Chen, Lingjuan Lyu, Chong Chen, Yaochu Jin
AIFin, AI4CE
27 Jun 2023

On Efficient Training of Large-Scale Deep Learning Models: A Literature Review
Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao
VLM
07 Apr 2023

Scaling Expert Language Models with Unsupervised Domain Discovery
Suchin Gururangan, Margaret Li, M. Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer
MoE
24 Mar 2023

SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
Max Ryabinin, Tim Dettmers, Michael Diskin, Alexander Borzunov
MoE
27 Jan 2023

Evaluation for Change
Rishi Bommasani
ELM
20 Dec 2022

lo-fi: distributed fine-tuning without communication
Mitchell Wortsman, Suchin Gururangan, Shen Li, Ali Farhadi, Ludwig Schmidt, Michael G. Rabbat, Ari S. Morcos
19 Oct 2022

Varuna: Scalable, Low-cost Training of Massive Deep Learning Models
Sanjith Athlur, Nitika Saran, Muthian Sivathanu, Ramachandran Ramjee, Nipun Kwatra
GNN
07 Nov 2021

ZeRO-Offload: Democratizing Billion-Scale Model Training
Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, Yuxiong He
MoE
18 Jan 2021

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
MoE
17 Sep 2019