Troubleshooting performance problems of large model training (LMT) is immensely challenging due to the unprecedented scale of modern GPU clusters, the complexity of software-hardware interactions, and the data intensity of the training process. Existing troubleshooting approaches designed for traditional distributed systems or datacenter networks fall short and can hardly be applied to real-world training systems. In this paper, we present PerfTracker, the first online troubleshooting system that uses fine-grained profiling to diagnose performance issues of large-scale model training in production. PerfTracker can diagnose performance issues rooted in both hardware (e.g., GPUs and their interconnects) and software (e.g., Python functions and GPU operations), and it scales to LMT on modern GPU clusters. PerfTracker effectively summarizes the runtime behavior patterns of fine-grained LMT functions via online profiling, and leverages differential observability to localize root causes with minimal impact on production. PerfTracker has been deployed as a production service for large-scale GPU clusters of O(10,000) GPUs (product homepage: this https URL). It has been used to diagnose a variety of difficult performance issues.
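The core idea described above, summarizing fine-grained runtime behavior per rank and comparing those summaries across ranks to localize a root cause, can be illustrated with a minimal Python sketch. This is not PerfTracker's implementation: the `profiles` data, the `summarize` and `localize` helpers, and the 2x deviation threshold are hypothetical, and a real system would collect such profiles online from Python functions and GPU operations rather than from hard-coded numbers.

```python
# Minimal sketch of differential, profile-based localization.
# Hypothetical data and helpers; not PerfTracker's actual implementation.
from statistics import mean, median

# Per-rank profiles: function name -> per-iteration durations (ms).
# In a real system these would come from online profiling of Python
# functions and GPU operations; here they are made-up numbers.
profiles = {
    0: {"all_reduce": [12.1, 12.3, 12.0], "matmul_fwd": [8.0, 8.1, 8.2]},
    1: {"all_reduce": [12.2, 12.4, 12.1], "matmul_fwd": [8.1, 8.0, 8.3]},
    2: {"all_reduce": [45.7, 46.2, 44.9], "matmul_fwd": [8.2, 8.1, 8.0]},
    3: {"all_reduce": [12.0, 12.2, 12.3], "matmul_fwd": [8.0, 8.2, 8.1]},
}

def summarize(profile):
    """Collapse raw per-iteration samples into one summary value per function."""
    return {fn: mean(samples) for fn, samples in profile.items()}

def localize(profiles, threshold=2.0):
    """Differential comparison: flag (rank, function) pairs whose summarized
    duration exceeds the cross-rank median by more than `threshold`x."""
    summaries = {rank: summarize(p) for rank, p in profiles.items()}
    suspects = []
    for fn in next(iter(summaries.values())):
        baseline = median(s[fn] for s in summaries.values())
        for rank, s in summaries.items():
            if s[fn] > threshold * baseline:
                suspects.append((rank, fn, s[fn], baseline))
    return suspects

for rank, fn, observed, baseline in localize(profiles):
    print(f"rank {rank}: {fn} took {observed:.1f} ms "
          f"vs. cross-rank median {baseline:.1f} ms")
```

In this toy example, rank 2's `all_reduce` is flagged because its summarized duration deviates sharply from the cross-rank median, mirroring how a differential comparison of otherwise symmetric workers can isolate a slow component such as a degraded interconnect.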
@article{guan2025_2506.08528,
  title   = {PerfTracker: Online Performance Troubleshooting for Large-scale Model Training in Production},
  author  = {Yu Guan and Zhiyu Yin and Haoyu Chen and Sheng Cheng and Chaojie Yang and Kun Qian and Tianyin Xu and Yang Zhang and Hanyu Zhao and Yong Li and Wei Lin and Dennis Cai and Ennan Zhai},
  journal = {arXiv preprint arXiv:2506.08528},
  year    = {2025}
}