Pipeline Parallelism for Inference on Heterogeneous Edge Computing

28 October 2021
Yang Hu, Connor Imes, Xuanang Zhao, Souvik Kundu, P. Beerel, S. Crago, J. Walters
Abstract

Deep neural networks with large model sizes achieve state-of-the-art results for tasks in computer vision (CV) and natural language processing (NLP). However, these large-scale models are too compute- or memory-intensive for resource-constrained edge devices. Prior works on parallel and distributed execution primarily focus on training -- rather than inference -- using homogeneous accelerators in data centers. We propose EdgePipe, a distributed framework for edge systems that uses pipeline parallelism to both speed up inference and enable running larger (and more accurate) models that otherwise cannot fit on single edge devices. EdgePipe achieves these results by using an optimal partition strategy that considers heterogeneity in compute, memory, and network bandwidth. Our empirical evaluation demonstrates that EdgePipe achieves 10.59× and 11.88× speedup using 16 edge devices for the ViT-Large and ViT-Huge models, respectively, with no accuracy loss. Similarly, EdgePipe improves ViT-Huge throughput by 3.93× over a 4-node baseline using 16 edge devices, which independently cannot fit the model in memory. Finally, we show up to 4.16× throughput improvement over the state-of-the-art PipeDream when using a heterogeneous set of devices.
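
The abstract names the heterogeneity-aware partition strategy without detailing it. As a rough illustration of the kind of problem such a partitioner solves, the minimal Python sketch below exhaustively tries every contiguous split of a layer sequence across a fixed ordering of heterogeneous devices, subject to per-device memory limits, and minimizes the bottleneck stage time (steady-state pipeline throughput is bounded by the slowest stage). The cost model, names, and fixed device ordering are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from functools import lru_cache

@dataclass(frozen=True)
class Device:
    flops: float      # effective compute rate (FLOP/s) -- assumed cost model
    memory: float     # memory available for weights (bytes)
    bandwidth: float  # incoming network bandwidth (bytes/s)

def partition(layer_flops, layer_mem, act_bytes, devices):
    """Assign contiguous layer ranges to devices in a fixed order so that the
    slowest pipeline stage (compute + activation receive time) is as fast as
    possible. Returns (bottleneck stage time, stage boundary indices)."""
    n, m = len(layer_flops), len(devices)

    @lru_cache(maxsize=None)
    def best(i, d):
        # Minimal achievable bottleneck for layers i..n-1 on devices d..m-1.
        if i == n:
            return 0.0, ()               # all layers placed
        if d == m:
            return float("inf"), ()      # out of devices: infeasible
        dev = devices[d]
        best_so_far = (float("inf"), ())
        flops = mem = 0.0
        for j in range(i, n):            # assign layers [i, j] to device d
            flops += layer_flops[j]
            mem += layer_mem[j]
            if mem > dev.memory:         # stage no longer fits in memory
                break
            # Stage latency: compute time plus the time to receive the
            # previous stage's activation over this device's network link.
            recv = act_bytes[i - 1] / dev.bandwidth if i > 0 else 0.0
            stage = flops / dev.flops + recv
            rest, cuts = best(j + 1, d + 1)
            if max(stage, rest) < best_so_far[0]:
                best_so_far = (max(stage, rest), (j + 1,) + cuts)
        return best_so_far

    return best(0, 0)

# Toy example: 8 identical transformer layers across 4 unequal devices.
layers = [1e9] * 8            # FLOPs per layer (hypothetical)
mems = [50e6] * 8             # parameter bytes per layer
acts = [4e6] * 8              # activation bytes emitted by each layer
devs = [Device(2e9, 200e6, 1e8), Device(1e9, 200e6, 1e8),
        Device(4e9, 100e6, 5e7), Device(1e9, 400e6, 1e8)]
bottleneck, cuts = partition(layers, mems, acts, devs)
print(f"bottleneck stage time: {bottleneck:.3f}s, stage boundaries: {cuts}")
```

A real system would additionally have to search over device orderings and account for micro-batch pipelining overheads; this sketch ignores both for brevity.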

arXiv: https://arxiv.org/abs/2110.14895