EFIM: Efficient Serving of LLMs for Infilling Tasks with Improved KV Cache Reuse

Large language models (LLMs) are often used for infilling tasks, which involve predicting or generating missing information in a given text. These tasks typically require multiple interactions with similar context. To reduce the computation of repeated historical tokens, cross-request key-value (KV) cache reuse, a technique that stores and reuses intermediate computations, has become a crucial method in multi-round interactive services. However, in infilling tasks, KV cache reuse is often hindered by the structure of the prompt format, which typically consists of a prefix and suffix relative to the insertion point. Specifically, the KV cache of the prefix or suffix part is frequently invalidated as the other part (suffix or prefix) is incrementally generated. To address this issue, we propose EFIM, a transformed prompt format of FIM (fill-in-the-middle) that unleashes the performance potential of KV cache reuse. Although the transformed prompt resolves the inefficiency, it exposes subtoken generation problems in current LLMs, which have difficulty generating partial words accurately. Therefore, we introduce a fragment tokenization training method that splits text into multiple fragments before tokenization during data processing. Experiments on two representative LLMs show that LLM serving with EFIM can lower latency by 52% and improve throughput by 98% while maintaining the original infilling capability. EFIM's source code is publicly available at this https URL.
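To make the cache-reuse problem concrete, here is a minimal sketch. It assumes whitespace tokenization as a stand-in for BPE, illustrative <PRE>/<SUF>/<MID> sentinel strings, and a hypothetical reordered layout used purely for illustration; it is not EFIM's actual prompt transformation, which the abstract does not spell out. The sketch measures how many leading prompt tokens a prefix-matching KV cache could reuse between two consecutive infilling rounds.

```python
# Sketch: why the common PSM-style FIM layout "<PRE> prefix <SUF> suffix <MID>"
# defeats prefix-matching KV cache reuse across rounds, versus a hypothetical
# reordering that keeps the incrementally growing text at the end of the prompt.

def tokenize(text: str) -> list[str]:
    """Stand-in tokenizer: whitespace split (real systems use BPE)."""
    return text.split()

def shared_prefix_len(a: list[str], b: list[str]) -> int:
    """Number of leading tokens two prompts share; this is the portion a
    prefix-matching KV cache can reuse across requests."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def psm_prompt(prefix: str, suffix: str) -> str:
    # Common fill-in-the-middle layout: suffix tokens sit *after* the prefix,
    # so any growth of the prefix shifts them and invalidates their cache.
    return f"<PRE> {prefix} <SUF> {suffix} <MID>"

def reordered_prompt(prefix: str, suffix: str) -> str:
    # Hypothetical reordering for illustration only: keep the stable suffix
    # first and the growing prefix last.
    return f"<SUF> {suffix} <PRE> {prefix} <MID>"

if __name__ == "__main__":
    suffix = "return result }"                       # code after the cursor (stable)
    prefix_round1 = "def solve(x): {"                # code before the cursor
    prefix_round2 = prefix_round1 + " result = x*x"  # model output appended next round

    for name, fmt in [("PSM FIM", psm_prompt), ("reordered", reordered_prompt)]:
        t1 = tokenize(fmt(prefix_round1, suffix))
        t2 = tokenize(fmt(prefix_round2, suffix))
        reuse = shared_prefix_len(t1, t2)
        print(f"{name:>9}: {reuse}/{len(t2)} tokens of the round-2 prompt hit the cache")
```

Running the sketch, the PSM layout reuses only the tokens up to the insertion point (the suffix's cached KV entries are shifted and discarded), while the reordered layout keeps everything before the growing region eligible for cache hits, which is the inefficiency EFIM's prompt transformation targets.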
@article{guo2025_2505.21889,
  title   = {EFIM: Efficient Serving of LLMs for Infilling Tasks with Improved KV Cache Reuse},
  author  = {Tianyu Guo and Hande Dong and Yichong Leng and Feng Liu and Cheater Lin and Nong Xiao and Xianwei Zhang},
  journal = {arXiv preprint arXiv:2505.21889},
  year    = {2025}
}