
Balanced and Elastic End-to-end Training of Dynamic LLMs

Abstract

To reduce computational and memory costs in Large Language Models (LLMs), dynamic workload reduction schemes like Mixture of Experts (MoEs), parameter pruning, layer freezing, sparse attention, early token exit, and Mixture of Depths (MoDs) have emerged. However, these methods introduce severe workload imbalances, limiting their practicality for large-scale distributed training. We propose DynMo, an autonomous dynamic load balancing solution that ensures optimal compute distribution when using pipeline parallelism in training dynamic models. DynMo adaptively balances workloads, dynamically packs tasks into fewer workers to free idle resources, and supports both multi-GPU single-node and multi-node systems. Compared to static training methods (Megatron-LM, DeepSpeed), DynMo accelerates training by up to 1.23x (MoEs), 3.18x (pruning), 2.23x (layer freezing), 4.02x (sparse attention), 4.52x (early exit), and 1.17x (MoDs). DynMo is available at this https URL.
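
To make the two ideas in the abstract concrete, the sketch below is a minimal, hypothetical illustration (not DynMo's implementation): (1) repartitioning contiguous transformer layers across pipeline stages to minimize the most loaded stage once per-layer costs have shifted (e.g., after freezing, pruning, or early token exit), and (2) packing the work onto the smallest number of stages whose bottleneck stays under a budget so surplus workers can be freed. All function names, parameters, and timing numbers are invented for illustration.

# Hypothetical sketch of pipeline-stage rebalancing and packing; not DynMo's code.
from typing import List, Tuple


def min_bottleneck_partition(costs: List[float], num_stages: int) -> List[List[int]]:
    """Split per-layer costs into at most `num_stages` contiguous stages,
    minimizing the most loaded stage (binary search on the bottleneck)."""

    def stages_needed(limit: float) -> int:
        # How many contiguous stages are required if no stage may exceed `limit`.
        count, acc = 1, 0.0
        for c in costs:
            if acc + c > limit:
                count += 1
                acc = c
            else:
                acc += c
        return count

    lo, hi = max(costs), sum(costs)
    for _ in range(60):  # floating-point bisection; `hi` stays feasible
        mid = (lo + hi) / 2
        if stages_needed(mid) <= num_stages:
            hi = mid
        else:
            lo = mid

    # Rebuild the partition using the feasible bottleneck `hi`.
    stages, current, acc = [], [], 0.0
    for i, c in enumerate(costs):
        if current and acc + c > hi:
            stages.append(current)
            current, acc = [], 0.0
        current.append(i)
        acc += c
    stages.append(current)
    return stages


def pack_stages(costs: List[float], max_stages: int,
                stage_budget: float) -> Tuple[int, List[List[int]]]:
    """Return the smallest stage count whose bottleneck stage stays under
    `stage_budget`, so surplus pipeline workers can be released."""
    for n in range(1, max_stages + 1):
        stages = min_bottleneck_partition(costs, n)
        bottleneck = max(sum(costs[i] for i in s) for s in stages)
        if bottleneck <= stage_budget:
            return n, stages
    return max_stages, min_bottleneck_partition(costs, max_stages)


if __name__ == "__main__":
    # Per-layer forward+backward times in ms; the skew mimics layers whose
    # work shrank after freezing, pruning, or early token exit.
    layer_ms = [9.0, 8.5, 8.0, 2.0, 1.5, 1.0, 7.5, 7.0, 1.0, 0.5, 6.0, 5.5]
    print("balanced 4-stage split:", min_bottleneck_partition(layer_ms, 4))
    print("stages kept under a 30 ms budget:",
          pack_stages(layer_ms, max_stages=4, stage_budget=30.0))

The binary search over the bottleneck is the standard contiguous (linear) partitioning technique; an actual pipeline rebalancer would additionally weigh activation memory, communication volume, and the cost of migrating layers when moving stage boundaries.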

@article{wahib2025_2505.14864,
  title={Balanced and Elastic End-to-end Training of Dynamic LLMs},
  author={Mohamed Wahib and Muhammed Abdullah Soyturk and Didem Unat},
  journal={arXiv preprint arXiv:2505.14864},
  year={2025}
}