AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness

13 October 2022
Dacheng Li
Hongyi Wang
Eric P. Xing
Haotong Zhang
    MoE

Papers citing "AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness"

4 papers shown
Accelerating Heterogeneous Tensor Parallelism via Flexible Workload Control
Zhigang Wang
Xu Zhang
Ning Wang
Chuanfei Xu
Jie Nie
Zhiqiang Wei
Yu Gu
Ge Yu
21 Jan 2024
Automated Tensor Model Parallelism with Overlapped Communication for Efficient Foundation Model Training
Shengwei Li
Zhiquan Lai
Yanqi Hao
Weijie Liu
Ke-shi Ge
Xiaoge Deng
Dongsheng Li
KaiCheng Lu
25 May 2023
Does compressing activations help model parallel training?
S. Bian
Dacheng Li
Hongyi Wang
Eric P. Xing
Shivaram Venkataraman
06 Jan 2023
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi
M. Patwary
Raul Puri
P. LeGresley
Jared Casper
Bryan Catanzaro
MoE
17 Sep 2019