
A Distributed Multi-GPU System for Large-Scale Node Embedding at Tencent

28 May 2020 (arXiv:2005.13789)
Wanjing Wei, Yangzihao Wang, Ping Gao, Shijie Sun, Donghai Yu
Abstract

Real-world node embedding applications often involve graphs with hundreds of billions of edges and high-dimensional node features. Scaling node embedding systems to support these applications efficiently remains a challenging problem. In this paper, we present a high-performance multi-GPU node embedding system. It uses model parallelism to split node embeddings across each GPU's local parameter server, and data parallelism to train these embeddings on different edge samples in parallel. We propose a hierarchical data partitioning strategy and an embedding training pipeline that optimize both communication and memory usage on a GPU cluster. By decoupling CPU tasks (random walk) from GPU tasks (embedding training), our system is highly flexible and can fully utilize all computing resources on a GPU cluster. Compared with the current state-of-the-art multi-GPU single-node embedding system, our system achieves a 5.9x-14.4x speedup on average with competitive or better accuracy on open datasets. Using 40 NVIDIA V100 GPUs on a network with almost three hundred billion edges and more than one billion nodes, our implementation requires only 3 minutes to finish one training epoch.
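
The core idea in the abstract is a hybrid of model parallelism (the embedding table is sharded across per-GPU local parameter servers) and data parallelism (each worker trains on its own stream of edge samples), with CPU-side random-walk sampling decoupled from GPU-side training. The sketch below illustrates that idea in plain single-process Python: a bounded queue stands in for the CPU/GPU pipeline, and the shard-by-node-id scheme, the skip-gram-with-negative-sampling update, and all names in it (ShardedEmbedding, walk_producer, trainer) are illustrative assumptions, not the paper's actual implementation.

# Minimal single-process sketch of the hybrid parallelism described above.
# All names and update rules are hypothetical; the paper's system is a
# distributed multi-GPU implementation with a hierarchical partitioning scheme.
import numpy as np
from queue import Queue
from threading import Thread

NUM_SHARDS = 4        # each "GPU" would hold one shard of the embedding table
NUM_NODES = 10_000
DIM = 64
LR = 0.025

class ShardedEmbedding:
    """Model parallelism: node embeddings are split across per-shard
    parameter servers; the shard is chosen by node id."""
    def __init__(self, num_nodes, dim, num_shards):
        self.num_shards = num_shards
        rows = (num_nodes + num_shards - 1) // num_shards
        self.shards = [np.random.uniform(-0.5 / dim, 0.5 / dim, (rows, dim))
                       for _ in range(num_shards)]

    def lookup(self, node):
        return self.shards[node % self.num_shards][node // self.num_shards]

    def update(self, node, grad):
        self.shards[node % self.num_shards][node // self.num_shards] -= LR * grad

def walk_producer(edges, queue, num_batches, batch_size=512):
    """CPU task: emit batches of positive (src, dst) samples; stands in
    for the random-walk sampler that runs on CPUs in the paper."""
    rng = np.random.default_rng(0)
    for _ in range(num_batches):
        queue.put(edges[rng.integers(0, len(edges), batch_size)])
    queue.put(None)  # poison pill signals end of training data

def trainer(emb, queue):
    """GPU task: consume edge batches and apply SGNS-style updates."""
    rng = np.random.default_rng(1)
    while (batch := queue.get()) is not None:
        for src, dst in batch:
            # one positive context and one random negative per edge sample
            for ctx, label in ((dst, 1.0), (rng.integers(NUM_NODES), 0.0)):
                u, v = emb.lookup(src), emb.lookup(ctx)
                g = 1.0 / (1.0 + np.exp(-(u @ v))) - label  # sigmoid(u.v) - label
                emb.update(src, g * v)
                emb.update(ctx, g * u)

edges = np.random.default_rng(42).integers(0, NUM_NODES, (100_000, 2))
emb = ShardedEmbedding(NUM_NODES, DIM, NUM_SHARDS)
q = Queue(maxsize=8)  # bounded queue decouples CPU sampling from training
Thread(target=walk_producer, args=(edges, q, 20)).start()
trainer(emb, q)

In the real system each shard would presumably live in one GPU's memory, with remote-shard lookups going over the interconnect, and the bounded queue corresponds to the training pipeline that overlaps CPU sampling with GPU embedding updates.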
