SLED: A Speculative LLM Decoding Framework for Efficient Edge Serving

The growing gap between the increasing complexity of large language models (LLMs) and the limited computational budgets of edge devices poses a key challenge for efficient on-device inference, despite gradual improvements in hardware capabilities. Existing strategies, such as aggressive quantization, pruning, or remote inference, either trade accuracy for efficiency or incur substantial cost burdens. This position paper introduces a new framework that leverages speculative decoding, previously viewed primarily as a technique for accelerating autoregressive LLM generation, as an approach specifically adapted to edge computing by orchestrating computation across heterogeneous devices. We propose SLED, a framework that allows lightweight edge devices to draft multiple candidate tokens locally using diverse draft models, while a single, shared edge server verifies the tokens using a more precise target model. To further increase verification efficiency, the edge server batches the diverse verification requests from the devices. This approach supports device heterogeneity and reduces the server-side memory footprint by sharing the same upstream target model across multiple devices. Our initial experiments with Jetson Orin Nano, Raspberry Pi 4B/5, and an edge server equipped with 4 NVIDIA A100 GPUs indicate substantial benefits: 2.2x higher system throughput, 2.8x higher system capacity, and better cost efficiency, all without sacrificing model accuracy.
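To make the draft-then-verify division of labor concrete, below is a minimal, self-contained Python sketch of speculative decoding with batched server-side verification. It uses toy deterministic distributions in place of real draft and target models, and the function names (draft_tokens, verify_batch, etc.) are illustrative assumptions rather than SLED's actual API; the acceptance rule shown is the standard speculative-sampling accept/reject step.

import random

VOCAB = list(range(8))   # toy vocabulary of 8 token ids
GAMMA = 4                # number of tokens drafted per round

def toy_dist(name, ctx):
    # Deterministic stand-in for a model's next-token distribution.
    w = [((hash((name, tuple(ctx), t)) & 0xFFFF) + 1) for t in VOCAB]
    s = sum(w)
    return [x / s for x in w]

def draft_tokens(ctx, gamma=GAMMA):
    # Edge device: autoregressively draft `gamma` candidate tokens with a
    # lightweight draft model and record their draft probabilities.
    drafted, probs = [], []
    for _ in range(gamma):
        p = toy_dist("draft", ctx + drafted)
        tok = random.choices(VOCAB, weights=p)[0]
        drafted.append(tok)
        probs.append(p[tok])
    return drafted, probs

def verify_one(ctx, drafted, draft_probs):
    # Server: accept the i-th draft token t with probability min(1, q(t)/p(t)),
    # where q is the target model and p the draft model; on the first
    # rejection, resample from the target distribution and stop.
    accepted = []
    for tok, p_tok in zip(drafted, draft_probs):
        q = toy_dist("target", ctx + accepted)
        if random.random() < min(1.0, q[tok] / p_tok):
            accepted.append(tok)
        else:
            accepted.append(random.choices(VOCAB, weights=q)[0])
            break
    return accepted

def verify_batch(requests):
    # Edge server: verification requests from heterogeneous devices are
    # batched so one shared target model serves all of them. A real server
    # would run a single batched forward pass; this sketch simply loops.
    return [verify_one(ctx, toks, probs) for ctx, toks, probs in requests]

if __name__ == "__main__":
    device_contexts = [[1, 2, 3], [4, 5]]   # prompts from two edge devices
    requests = [(ctx, *draft_tokens(ctx)) for ctx in device_contexts]
    for ctx, accepted in zip(device_contexts, verify_batch(requests)):
        print(f"device context {ctx} -> accepted tokens {accepted}")

Because all drafted tokens are either accepted or replaced by a sample from the target model, the verified output follows the target model's distribution, which is why this scheme can raise throughput without sacrificing accuracy.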
@article{li2025_2506.09397,
  title   = {SLED: A Speculative LLM Decoding Framework for Efficient Edge Serving},
  author  = {Xiangchen Li and Dimitrios Spatharakis and Saeid Ghafouri and Jiakun Fan and Hans Vandierendonck and Deepu John and Bo Ji and Dimitrios Nikolopoulos},
  journal = {arXiv preprint arXiv:2506.09397},
  year    = {2025}
}