
Efficient and Workload-Aware LLM Serving via Runtime Layer Swapping and KV Cache Resizing

Main: 9 pages
7 figures
Bibliography: 5 pages
1 table
Appendix: 5 pages
Abstract

Efficiently serving large language models (LLMs) under dynamic and bursty workloads remains a key challenge for real-world deployment. Existing serving frameworks and static model compression techniques fail to adapt to workload fluctuations: full-precision serving violates service-level objectives (SLOs) under load spikes, while static quantization incurs persistent accuracy degradation. We present MorphServe, a dynamic, workload-aware LLM serving framework based on morphological adaptation. MorphServe introduces two asynchronous, token-level runtime mechanisms: quantized layer swapping, which selectively replaces less impactful layers with quantized alternatives during high-load periods, and pressure-aware KV cache resizing, which dynamically adjusts KV cache capacity in response to memory pressure. These mechanisms enable state-preserving transitions with minimal runtime overhead and are fully compatible with modern scheduling and attention techniques. Extensive experiments on Vicuna and Llama family models with real-world workloads demonstrate that MorphServe reduces average SLO violations by 92.45% and improves P95 time-to-first-token (TTFT) latency by 2.2x-3.9x compared to full-precision serving, without compromising generation quality. These results establish MorphServe as a practical and elastic solution for LLM deployment in dynamic environments.
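
To make the two mechanisms concrete, here is a minimal Python sketch of how a token-level controller might coordinate quantized layer swapping with pressure-aware KV cache resizing. This is an illustration under stated assumptions, not the paper's implementation: MorphController, LayerSlot, KVCacheStub, and the watermark thresholds are hypothetical names, and the sketch assumes pre-loaded quantized layer variants and offline-profiled per-layer impact scores.

import threading
from dataclasses import dataclass

@dataclass
class LayerSlot:
    """One transformer layer with a full-precision and a pre-quantized variant."""
    full_precision: object   # e.g., a torch.nn.Module (placeholder here)
    quantized: object        # its lower-bit counterpart, loaded ahead of time
    impact_score: float      # offline-profiled sensitivity of this layer
    swapped: bool = False

    def __post_init__(self):
        # The forward pass reads self.active; swapping flips this pointer
        # without touching activations or in-flight KV entries.
        self.active = self.full_precision

class KVCacheStub:
    """Stand-in for a paged KV cache that can release or reclaim blocks."""
    def __init__(self, num_blocks: int, min_blocks: int = 1):
        self.num_blocks = num_blocks
        self.min_blocks = min_blocks

    def shrink(self, step: int = 1):
        self.num_blocks = max(self.min_blocks, self.num_blocks - step)

    def grow(self, step: int = 1):
        self.num_blocks += step

class MorphController:
    """Hypothetical controller sketching MorphServe's two runtime mechanisms."""
    def __init__(self, layers, kv_cache, high_water=0.90, low_water=0.60):
        # Least impactful layers are swapped to quantized form first.
        self.layers = sorted(layers, key=lambda s: s.impact_score)
        self.kv_cache = kv_cache
        self.high_water = high_water   # pressure ratio that triggers adaptation
        self.low_water = low_water     # pressure ratio below which we restore
        self._lock = threading.Lock()

    def on_token_step(self, memory_used: float, memory_total: float):
        """Invoked asynchronously at token granularity by the serving loop."""
        pressure = memory_used / memory_total
        with self._lock:
            if pressure > self.high_water:
                self._swap_in_quantized()      # quantized layer swapping
                self.kv_cache.shrink()         # pressure-aware KV resizing
            elif pressure < self.low_water:
                self._restore_full_precision()
                self.kv_cache.grow()

    def _swap_in_quantized(self):
        # Swap the least impactful layer still running in full precision.
        for slot in self.layers:
            if not slot.swapped:
                slot.active = slot.quantized
                slot.swapped = True
                return

    def _restore_full_precision(self):
        # Undo swaps in reverse order: most impactful layers recover first.
        for slot in reversed(self.layers):
            if slot.swapped:
                slot.active = slot.full_precision
                slot.swapped = False
                return

# Illustrative usage: a memory spike triggers one swap and one cache shrink.
ctrl = MorphController(
    layers=[LayerSlot("fp_layer0", "int4_layer0", impact_score=0.12),
            LayerSlot("fp_layer1", "int4_layer1", impact_score=0.85)],
    kv_cache=KVCacheStub(num_blocks=1024),
)
ctrl.on_token_step(memory_used=9.5, memory_total=10.0)

The key design point the sketch tries to capture is that both actions are state-preserving: swapping flips a weight pointer rather than restarting requests, so transitions can happen mid-generation at token granularity.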

@article{su2025_2506.02006,
  title={Efficient and Workload-Aware LLM Serving via Runtime Layer Swapping and KV Cache Resizing},
  author={Zhaoyuan Su and Tingfeng Lan and Zirui Wang and Juncheng Yang and Yue Cheng},
  journal={arXiv preprint arXiv:2506.02006},
  year={2025}
}