ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Efficient Serving of LLM Applications with Probabilistic Demand Modeling

17 June 2025
Yifei Liu
Zuo Gan
Zhenghao Gan
Weiye Wang
Chen Chen
Yizhou Shan
Xusheng Chen
Zhenhua Han
Yifei Zhu
Shixuan Sun
Minyi Guo
arXiv (abs) · PDF · HTML
Main: 12 pages · 15 figures · Bibliography: 3 pages
Abstract

Applications based on Large Language Models (LLMs) consist of a series of tasks that address real-world problems with boosted capability, and these tasks place dynamic demand volumes on diverse backends. Existing serving systems treat the resource demands of LLM applications as a black box, compromising end-to-end efficiency through improper queuing orders and backend warm-up latency. We find that the resource demands of LLM applications can be modeled in a general and accurate manner with the Probabilistic Demand Graph (PDGraph). We then propose Hermes, which leverages PDGraph for efficient serving of LLM applications. Given the probabilistic demand description, Hermes applies the Gittins policy to determine the scheduling order that minimizes the average application completion time. It also uses the PDGraph model to prewarm cold backends at the proper moments. Experiments with diverse LLM applications confirm that Hermes effectively improves application serving efficiency, reducing the average completion time by over 70% and the P95 completion time by over 80%.
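The abstract's scheduling idea can be illustrated with a minimal sketch of a Gittins-index scheduler: each job's remaining demand is known only as a distribution, and the scheduler serves the job with the best ratio of completion probability to expected extra service over some lookahead quantum. This is a generic textbook-style sketch, not the Hermes implementation; the names `Job`, `gittins_index`, and `pick_next` and the discrete demand model are illustrative assumptions.

```python
# Sketch of Gittins-index scheduling over probabilistic demands.
# Demands are discrete: pmf maps a possible total size to its probability.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    attained: int   # service units already received
    pmf: dict       # {total_size: probability} demand model

def gittins_index(job: Job) -> float:
    """Best ratio, over candidate quanta, of the probability the job
    completes within the quantum to its expected service in the quantum,
    conditioned on the service already attained."""
    sizes = sorted(s for s in job.pmf if s > job.attained)
    if not sizes:
        return float("inf")  # job is certain to finish immediately
    tail = sum(job.pmf[s] for s in sizes)  # P(total size > attained)
    best = 0.0
    for quantum_end in sizes:  # it suffices to try quanta ending at support points
        p_done = sum(job.pmf[s] for s in sizes if s <= quantum_end) / tail
        e_serv = sum(job.pmf[s] * (min(s, quantum_end) - job.attained)
                     for s in sizes) / tail
        best = max(best, p_done / e_serv)
    return best

def pick_next(jobs):
    """Serve the job with the highest Gittins index."""
    return max(jobs, key=gittins_index)

# A probably-short job outranks a deterministic medium-length one:
a = Job("likely-short", 0, {1: 0.9, 100: 0.1})
b = Job("surely-long", 0, {10: 1.0})
print(pick_next([a, b]).name)  # → likely-short
```

With a demand model like PDGraph in place of the toy `pmf`, the same index ordering is what lets a scheduler favor applications that are likely to finish soon, which is how the Gittins policy minimizes average completion time.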

@article{liu2025_2506.14851,
  title={Efficient Serving of LLM Applications with Probabilistic Demand Modeling},
  author={Yifei Liu and Zuo Gan and Zhenghao Gan and Weiye Wang and Chen Chen and Yizhou Shan and Xusheng Chen and Zhenhua Han and Yifei Zhu and Shixuan Sun and Minyi Guo},
  journal={arXiv preprint arXiv:2506.14851},
  year={2025}
}