GPOP: A cache- and work-efficient framework for Graph Processing Over Partitions

21 June 2018
Kartik Lakhotia
Sourav Pati
Rajgopal Kannan
Viktor Prasanna
Abstract

The past decade has seen the development of many shared-memory graph processing frameworks intended to reduce the effort of developing high-performance parallel applications. However, many of these frameworks, based on vertex-centric or edge-centric paradigms, suffer from several issues, such as poor cache utilization, irregular memory accesses, heavy use of synchronization primitives, and theoretical inefficiency, that deteriorate overall performance and scalability. Recently, we proposed a cache- and memory-efficient partition-centric paradigm for computing PageRank. In this paper, we generalize this approach to develop a novel Graph Processing Over Partitions (GPOP) framework that is cache-efficient, scalable, and work-efficient. GPOP induces locality in memory accesses by increasing the granularity of execution to vertex subsets called 'partitions', thereby dramatically improving the cache performance of a variety of graph algorithms. It achieves high scalability by enabling completely lock- and atomic-free computation. GPOP's built-in analytical performance model enables it to use a hybrid of source-centric and partition-centric communication modes in a way that ensures work efficiency in each iteration, while simultaneously exploiting high-bandwidth sequential memory accesses. We extensively evaluate the performance of GPOP for a variety of graph algorithms, using several large datasets. We observe that GPOP incurs up to 9x, 6.8x, and 5.5x fewer L2 cache misses compared to Ligra, GraphMat, and Galois, respectively. In terms of execution time, GPOP is up to 19x, 9.3x, and 3.6x faster than Ligra, GraphMat, and Galois, respectively.
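
The abstract's core idea, raising the granularity of execution from individual vertices to cache-sized partitions and routing updates through per-partition bins so that no locks or atomics are needed, can be illustrated with a small scatter/gather sketch. The C++ below is only a hedged illustration of that pattern under assumed data structures, not GPOP's actual API; all types and names (Partition, Update, scatter, gather) are hypothetical, and a PageRank-style contribution step stands in for the generic vertex program.

// Minimal sketch of partition-centric scatter/gather (not the GPOP API).
// A PageRank-style contribution is used as the example; the teleport term
// and convergence loop are omitted.
#include <cstdint>
#include <vector>

struct Edge   { uint32_t src, dst; };
struct Update { uint32_t dst; float val; };

// Vertices are split into contiguous ranges ("partitions") sized so that the
// per-partition state fits in cache.
struct Partition {
    uint32_t begin, end;          // vertex id range [begin, end)
    std::vector<Edge> out_edges;  // edges whose source lies in this partition
};

// Scatter phase: the thread owning partition p streams its own edges and
// appends each contribution into a bin keyed by the destination's partition.
// Each thread writes only to its own row of bins, so the writes are
// sequential and need no locks or atomics.
void scatter(const Partition& p,
             const std::vector<float>& rank,
             const std::vector<uint32_t>& out_degree,
             std::vector<std::vector<Update>>& my_bins,  // one bin per destination partition
             uint32_t part_size) {
    for (const Edge& e : p.out_edges) {
        float contrib = rank[e.src] / out_degree[e.src];
        my_bins[e.dst / part_size].push_back({e.dst, contrib});
    }
}

// Gather phase: the thread owning partition q reads only the bins addressed
// to q, so all writes to next_rank fall inside one cache-resident vertex range.
void gather(uint32_t q,
            const std::vector<std::vector<std::vector<Update>>>& bins,  // bins[src_part][dst_part]
            std::vector<float>& next_rank,
            float damping) {
    for (const auto& src_bins : bins)
        for (const Update& u : src_bins[q])
            next_rank[u.dst] += damping * u.val;
}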
