Distributed Graph Neural Network Inference With Just-In-Time Compilation For Industry-Scale Graphs

8 March 2025
Xiabao Wu
Yongchao Liu
Wei Qin
Chuntao Hong
Abstract

Graph neural networks (GNNs) have delivered remarkable results in various fields. However, the rapid growth in the scale of graph data has introduced significant performance bottlenecks for GNN inference: both computational complexity and memory usage have risen dramatically, with memory becoming the critical limitation. Although graph sampling-based subgraph learning methods can help mitigate computational and memory demands, they come with drawbacks such as information loss and substantial redundant computation across subgraphs. This paper introduces an innovative processing paradigm for distributed graph learning that abstracts GNNs with a new set of programming interfaces and leverages Just-In-Time (JIT) compilation technology to its full potential. This paradigm enables GNNs to fully exploit the computational resources of distributed clusters by eliminating the drawbacks of subgraph learning methods, leading to a more efficient inference process. Our experimental results demonstrate that on industry-scale graphs of up to 500 million nodes and 22.4 billion edges, our method achieves a performance boost of up to 27.4 times.
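The abstract contrasts subgraph-sampling approaches with full-graph message passing over a distributed cluster. As a point of reference, here is a minimal illustrative sketch of the kind of per-layer aggregation kernel a GNN inference engine would compile and execute; the function name and data layout are our own for illustration, not the paper's programming interfaces:

```python
import numpy as np

def mean_aggregate(features, edges, num_nodes):
    """One GNN mean-aggregation step over the full edge list.

    features: (num_nodes, dim) node feature matrix
    edges: iterable of (src, dst) pairs; dst aggregates from src
    """
    out = np.zeros_like(features)
    deg = np.zeros(num_nodes)
    for src, dst in edges:
        out[dst] += features[src]
        deg[dst] += 1
    deg[deg == 0] = 1  # avoid division by zero for isolated nodes
    return out / deg[:, None]

features = np.array([[1.0], [2.0], [3.0]])
edges = [(0, 2), (1, 2), (2, 0)]
h = mean_aggregate(features, edges, 3)
# node 2 averages the features of nodes 0 and 1 -> 1.5
```

Because no subgraphs are sampled, every edge contributes exactly once per layer, avoiding the information loss and redundant cross-subgraph computation the abstract attributes to sampling-based methods; in the paper's setting, such kernels would be distributed across a cluster and JIT-compiled rather than interpreted as above.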

@article{wu2025_2503.06208,
  title={Distributed Graph Neural Network Inference With Just-In-Time Compilation For Industry-Scale Graphs},
  author={Xiabao Wu and Yongchao Liu and Wei Qin and Chuntao Hong},
  journal={arXiv preprint arXiv:2503.06208},
  year={2025}
}