SpecEdge: Scalable Edge-Assisted Serving Framework for Interactive LLMs

16 May 2025
Jinwoo Park, Seunggeun Cho, Dongsu Han
arXiv (abs) · PDF · HTML
Main: 8 pages · Appendix: 4 pages · Bibliography: 5 pages · 17 figures · 5 tables
Abstract

Large language models (LLMs) power many modern applications, but serving them at scale remains costly and resource-intensive, and current server-centric systems overlook consumer-grade GPUs at the edge. We introduce SpecEdge, an edge-assisted inference framework that splits LLM workloads between edge and server GPUs using a speculative decoding scheme, exchanging only token outputs over the network. SpecEdge employs proactive edge drafting, which overlaps edge token creation with server verification, and pipeline-aware scheduling, which interleaves multiple user requests to increase server-side throughput. Experiments show that SpecEdge improves overall cost efficiency by 1.91x by achieving 2.22x higher server throughput, and reduces inter-token latency by 11.24% compared to a server-only baseline, introducing a scalable, cost-effective paradigm for LLM serving.
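For readers unfamiliar with the mechanism, the draft-and-verify loop underlying speculative decoding can be sketched as follows. This is a minimal greedy-decoding sketch, not the authors' code: `draft_model` and `target_model` are hypothetical callables returning greedy next-token ids. SpecEdge builds on this pattern by running the draft model on an edge GPU, overlapping drafting with server verification (proactive edge drafting), and interleaving requests on the server, with only token ids crossing the network.

```python
# Minimal sketch of the draft-and-verify loop behind speculative decoding,
# assuming greedy decoding. `draft_model` and `target_model` are hypothetical
# callables mapping a list of token ids to greedy next-token predictions;
# this illustrates the generic scheme, not SpecEdge's implementation.

def speculative_step(prompt_ids, draft_model, target_model, k=4):
    """Propose k draft tokens (edge side), verify them in one target pass (server side)."""
    # Edge side: the small draft model autoregressively proposes k tokens.
    ctx = list(prompt_ids)
    draft_ids = []
    for _ in range(k):
        nxt = draft_model(ctx)   # greedy next-token id for the current context
        draft_ids.append(nxt)
        ctx.append(nxt)

    # Server side: a single target-model pass scores every draft position at
    # once; target_model returns the greedy next-token id at each position.
    target_preds = target_model(prompt_ids + draft_ids)

    accepted = []
    for i, tok in enumerate(draft_ids):
        expected = target_preds[len(prompt_ids) + i - 1]
        if tok == expected:
            accepted.append(tok)        # draft matches target: accept it
        else:
            accepted.append(expected)   # first mismatch: take the target's token
            break
    else:
        # All k drafts accepted; the same target pass yields one bonus token.
        accepted.append(target_preds[-1])

    return prompt_ids + accepted
```

Because the server verifies k drafted tokens in a single forward pass instead of generating them one by one, each server GPU can serve more concurrent requests, which is the source of the throughput and cost gains reported in the abstract.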

@article{park2025_2505.17052,
  title={SpecEdge: Scalable Edge-Assisted Serving Framework for Interactive LLMs},
  author={Jinwoo Park and Seunggeun Cho and Dongsu Han},
  journal={arXiv preprint arXiv:2505.17052},
  year={2025}
}