Rethinking Dynamic Networks and Heterogeneous Computing with Automatic Parallelization

Hybrid parallelism techniques are essential for efficiently training large language models (LLMs). However, current automatic parallel planning frameworks rarely account for node heterogeneity and dynamic network topology changes at the same time, which limits their effectiveness in practice. In this paper, we address these limitations by modeling heterogeneous nodes within dynamically changing network environments and by using simulation-based strategies to determine optimal parallel configurations. Our approach enables fine-grained workload allocation tailored to heterogeneous nodes and complex network scenarios, achieving performance competitive with state-of-the-art methods under regular, stable network conditions. In addition, we introduce a strategy-pruning technique that rapidly discards infeasible parallel configurations, substantially reducing the search space, and we further accelerate the search by evaluating candidates in parallel within the simulator. Preliminary evaluations confirm that our method notably improves training performance on heterogeneous nodes and adapts better to complex, dynamic scenarios such as cloud computing environments.
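To make the prune-then-simulate pipeline described above concrete, the following is a minimal sketch in Python. It assumes a hypothetical simulator interface: the configuration tuple (dp, tp, pp), the memory model in `is_feasible`, and the cost model in `simulate_step_time` are all illustrative stand-ins, not the paper's actual API or cost functions.

```python
# Sketch: enumerate hybrid-parallel configurations, prune infeasible
# ones by a memory estimate, then simulate survivors in parallel and
# pick the fastest. All constants and models here are assumptions.
from itertools import product
from concurrent.futures import ProcessPoolExecutor

NUM_GPUS = 16
GPU_MEMORY_GB = 24          # assumed per-device memory budget
MODEL_STATE_GB = 180        # assumed total model + optimizer state

def candidate_configs(num_gpus):
    """Enumerate (data, tensor, pipeline) parallel degrees that use all GPUs."""
    for dp, tp, pp in product(range(1, num_gpus + 1), repeat=3):
        if dp * tp * pp == num_gpus:
            yield (dp, tp, pp)

def is_feasible(cfg):
    """Strategy pruning: drop configs whose per-device memory overflows."""
    dp, tp, pp = cfg
    per_device_gb = MODEL_STATE_GB / (tp * pp)   # crude sharding estimate
    return per_device_gb <= GPU_MEMORY_GB

def simulate_step_time(cfg):
    """Placeholder for the simulator: returns an estimated step time (s).
    A real implementation would model heterogeneous node speeds and the
    current network topology instead of these toy terms."""
    dp, tp, pp = cfg
    compute = 1.0 / dp                  # data parallelism splits the batch
    comm = 0.05 * tp + 0.02 * (pp - 1)  # toy communication / bubble costs
    return compute + comm

def best_config(num_gpus=NUM_GPUS):
    feasible = [c for c in candidate_configs(num_gpus) if is_feasible(c)]
    # Evaluate surviving candidates in parallel, as the abstract suggests.
    with ProcessPoolExecutor() as pool:
        times = list(pool.map(simulate_step_time, feasible))
    return min(zip(times, feasible))[1]

if __name__ == "__main__":
    print("selected (dp, tp, pp):", best_config())
```

The key design point mirrored here is that pruning runs a cheap analytical check before any simulation is launched, so the expensive simulator only sees the (much smaller) feasible set.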
@article{wu2025_2506.02787,
  title   = {Rethinking Dynamic Networks and Heterogeneous Computing with Automatic Parallelization},
  author  = {Ruilong Wu and Xinjiao Li and Yisu Wang and Xinyu Chen and Dirk Kutscher},
  journal = {arXiv preprint arXiv:2506.02787},
  year    = {2025}
}