Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks

24 June 2020
Weilin Cong, R. Forsati, M. Kandemir, M. Mahdavi

Papers citing "Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks"

10 / 10 papers shown
LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation
Rui Xue, Haoyu Han, MohamadAli Torkamani, Jian Pei, Xiaorui Liu
GNN · 03 Feb 2023
Distributed Graph Neural Network Training: A Survey
Yingxia Shao, Hongzheng Li, Xizhi Gu, Hongbo Yin, Yawen Li, Xupeng Miao, Wentao Zhang, Bin Cui, Lei Chen
GNN, AI4CE · 01 Nov 2022
RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations
Zirui Liu, Sheng-Wei Chen, Kaixiong Zhou, Daochen Zha, Xiao Huang, Xia Hu
19 Oct 2022
A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking
Keyu Duan, Zirui Liu, Peihao Wang, Wenqing Zheng, Kaixiong Zhou, Tianlong Chen, Xia Hu, Zhangyang Wang
GNN · 14 Oct 2022
BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling
Cheng Wan, Youjie Li, Ang Li, Namjae Kim, Yingyan Lin
GNN · 21 Mar 2022
Decoupling the Depth and Scope of Graph Neural Networks
Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen
GNN · 19 Jan 2022
On Provable Benefits of Depth in Training Graph Convolutional Networks
Weilin Cong, M. Ramezani, M. Mahdavi
28 Oct 2021
IGLU: Efficient GCN Training via Lazy Updates
S. Narayanan, Aditya Sinha, Prateek Jain, Purushottam Kar, Sundararajan Sellamanickam
BDL · 28 Sep 2021
Scalable Graph Neural Network Training: The Case for Sampling
Marco Serafini, Hui Guan
GNN · 05 May 2021
Sampling methods for efficient training of graph convolutional networks: A survey
Xin Liu, Yurui Lai, Lei Deng, Guoqi Li, Xiaochun Ye
GNN · 10 Mar 2021