HP-GNN: Generating High Throughput GNN Training Implementation on CPU-FPGA Heterogeneous Platform

22 December 2021
Yi-Chien Lin
Bingyi Zhang
Viktor Prasanna
GNN

Papers citing "HP-GNN: Generating High Throughput GNN Training Implementation on CPU-FPGA Heterogeneous Platform"

6 / 6 papers shown
Efficient Message Passing Architecture for GCN Training on HBM-based FPGAs with Orthogonal Topology On-Chip Networks
Qizhe Wu
Letian Zhao
Yuchen Gui
Huawen Liang
Xiaotian Wang
GNN
06 Nov 2024
GNNBuilder: An Automated Framework for Generic Graph Neural Network Accelerator Generation, Simulation, and Optimization
Stefan Abi-Karam
Cong Hao
GNN
29 Mar 2023
HitGNN: High-throughput GNN Training Framework on CPU+Multi-FPGA Heterogeneous Platform
Yi-Chien Lin
Bingyi Zhang
Viktor Prasanna
GNN
02 Mar 2023
GraphAGILE: An FPGA-based Overlay Accelerator for Low-latency GNN Inference
Bingyi Zhang
Hanqing Zeng
Viktor Prasanna
GNN
02 Feb 2023
Accurate, Low-latency, Efficient SAR Automatic Target Recognition on FPGA
Bingyi Zhang
R. Kannan
Viktor Prasanna
Carl E. Busart
04 Jan 2023
Low-latency Mini-batch GNN Inference on CPU-FPGA Heterogeneous Platform
Bingyi Zhang
Hanqing Zeng
Viktor Prasanna
GNN
17 Jun 2022