High-Throughput LLM inference on Heterogeneous Clusters

18 April 2025
Yi Xiong, Jinqi Huang, Wenjie Huang, Xuebing Yu, Entong Li, Zhixiong Ning, Jinhua Zhou, Li Zeng, Xin Chen
Abstract

Many companies now operate heterogeneous clusters built from several types of AI accelerators. Using these clusters efficiently for high-throughput large language model (LLM) inference can significantly reduce costs and expedite task processing, but doing so raises two main challenges. First, different deployment configurations yield vastly different performance: the space of possible configurations is large and evaluating the effectiveness of any single configuration is complex, so finding an optimal one is difficult. Second, inference instances in a heterogeneous cluster differ in processing capacity and therefore handle requests at different speeds; estimating these capacities and designing a request scheduling algorithm that fully exploits each instance is challenging. This paper proposes a high-throughput inference service system for heterogeneous clusters. First, the deployment configuration is optimized by modeling resource amounts and expected throughput and searching the configuration space exhaustively. Second, a novel mechanism schedules requests among instances while fully accounting for their differing processing capabilities. Extensive experiments show that the proposed scheduler improves throughput by 122.5% and 33.6% on two heterogeneous clusters, respectively.
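The capacity-aware scheduling idea from the abstract can be illustrated with a minimal sketch: greedily assign each request to whichever instance would finish it earliest, given a per-instance throughput estimate, so faster instances naturally absorb proportionally more work. The instance names, throughput numbers, and the greedy earliest-finish rule below are illustrative assumptions, not the paper's actual algorithm.

```python
import heapq

def schedule(requests, instances):
    """Assign each request (id, token count) to the instance that would
    finish it earliest, given per-instance throughput in tokens/s."""
    # Min-heap of (projected_finish_time, instance_name, tokens_per_s).
    heap = [(0.0, name, tps) for name, tps in instances.items()]
    heapq.heapify(heap)
    assignment = {}
    for req_id, tokens in requests:
        finish, name, tps = heapq.heappop(heap)
        finish += tokens / tps  # faster instances accumulate load more slowly
        assignment[req_id] = name
        heapq.heappush(heap, (finish, name, tps))
    return assignment

# Hypothetical cluster: one instance assumed ~3x faster than the other.
reqs = [("r1", 300), ("r2", 300), ("r3", 300), ("r4", 300)]
caps = {"gpu-fast": 90.0, "gpu-slow": 30.0}
print(schedule(reqs, caps))
```

With a 3:1 speed ratio, the fast instance ends up with three of the four equal-sized requests, matching its share of the cluster's total capacity; a uniform round-robin scheduler would instead leave the fast instance idle while the slow one becomes the bottleneck.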

@article{xiong2025_2504.15303,
  title={High-Throughput LLM inference on Heterogeneous Clusters},
  author={Yi Xiong and Jinqi Huang and Wenjie Huang and Xuebing Yu and Entong Li and Zhixiong Ning and Jinhua Zhou and Li Zeng and Xin Chen},
  journal={arXiv preprint arXiv:2504.15303},
  year={2025}
}