AdaptGear: Accelerating GNN Training via Adaptive Subgraph-Level Kernels on GPUs

27 May 2023
Yangjie Zhou
Yaoxu Song
Jingwen Leng
Zihan Liu
Weihao Cui
Zhendong Zhang
Cong Guo
Quan Chen
Li-Wei Li
Minyi Guo
Abstract

Graph neural networks (GNNs) are powerful tools for exploring and learning from graph structures and features, so achieving high-performance execution for GNNs is crucially important. Prior works have proposed exploiting the sparsity (i.e., low density) of the input graph to accelerate GNNs, using either a full-graph-level or a block-level sparsity format. We show that these approaches fail to balance the benefits of sparsity against kernel execution efficiency. In this paper, we propose a novel system, AdaptGear, that addresses the challenge of optimizing GNN performance by leveraging kernels tailored to the density characteristics of the graph at the subgraph level. We also propose a method that dynamically chooses the optimal set of kernels for a given input graph. Our evaluation shows that AdaptGear achieves a significant performance improvement, up to 6.49× (1.87× on average), over state-of-the-art works on two mainstream NVIDIA GPUs across various datasets.
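The core idea the abstract describes, dispatching each subgraph to a dense or sparse kernel depending on its measured density, can be illustrated with a minimal sketch. This is not AdaptGear's actual implementation: the row-block partitioning, the density threshold, and the function name adaptive_spmm are all hypothetical stand-ins for illustration.

import numpy as np
import scipy.sparse as sp

def adaptive_spmm(adj, features, block_size=1024, density_threshold=0.1):
    """Illustrative sketch of subgraph-level kernel selection.

    Partitions the adjacency matrix into row blocks ("subgraphs") and
    runs each block through a dense GEMM or a sparse SpMM depending on
    its density. block_size and density_threshold are hypothetical
    tuning knobs, not values from the paper.
    """
    adj = adj.tocsr()
    n = adj.shape[0]
    out = np.zeros((n, features.shape[1]), dtype=features.dtype)
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        block = adj[start:end]  # subgraph-level slice of the adjacency
        density = block.nnz / (block.shape[0] * block.shape[1])
        if density >= density_threshold:
            # Dense kernel: pays off when the subgraph is dense enough
            out[start:end] = block.toarray() @ features
        else:
            # Sparse kernel: avoids wasted work on mostly-zero blocks
            out[start:end] = block @ features
    return out

# Hypothetical usage: a random sparse graph with 4,096 nodes, 16 features
n, f = 4096, 16
adj = sp.random(n, n, density=0.01, format="csr", dtype=np.float32)
feats = np.random.rand(n, f).astype(np.float32)
out = adaptive_spmm(adj, feats)

A real GPU implementation would select among tuned CUDA kernels rather than NumPy/SciPy calls, and the paper additionally proposes choosing the kernel set dynamically per input graph; the sketch only shows the density-driven dispatch at the subgraph level.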
