We introduce a method for sparsifying distributed algorithms and exhibit how it leads to improvements that go past known barriers in two algorithmic settings of large-scale graph processing: Massively Parallel Computation (MPC), and Local Computation Algorithms (LCA).

- MPC with Strongly Sublinear Memory: Recently, there has been growing interest in obtaining MPC algorithms that are faster than their classic $O(\log n)$-round parallel counterparts for problems such as MIS, Maximal Matching, 2-Approximation of Minimum Vertex Cover, and $(1+\epsilon)$-Approximation of Maximum Matching. Currently, all such MPC algorithms require $\tilde{\Omega}(n)$ memory per machine. Czumaj et al. [STOC'18] were the first to handle $\tilde{O}(n)$ memory, running in $O((\log\log n)^2)$ rounds. We obtain $\tilde{O}(\sqrt{\log \Delta})$-round MPC algorithms for all these four problems that work even when each machine has memory $n^{\alpha}$ for any constant $\alpha \in (0,1)$. Here, $\Delta$ denotes the maximum degree. These are the first sublogarithmic-time algorithms for these problems that break the linear memory barrier.

- LCAs with Query Complexity Below the Parnas-Ron Paradigm: Currently, the best known LCA for MIS has query complexity $\Delta^{O(\log \Delta)} \cdot poly(\log n)$, by Ghaffari [SODA'16]. As pointed out by Rubinfeld, obtaining a query complexity of $poly(\Delta \log n)$ remains a central open question. Ghaffari's bound almost reaches a barrier common to all known MIS LCAs, which simulate distributed algorithms by learning the local topology, à la Parnas-Ron [TCS'07] (a simulation of this kind is sketched below). This barrier follows from the $\Omega(\log \Delta / \log\log \Delta)$ distributed lower bound of Kuhn et al. [JACM'16]. We break this barrier and obtain an MIS LCA with query complexity $\Delta^{O(\log\log \Delta)} \cdot poly(\log n)$.
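To make the Parnas-Ron-style barrier concrete, here is a minimal, hedged Python sketch (not the paper's algorithm): the LCA answers a query about a vertex $v$ by collecting the $t$-hop ball around $v$ with neighbor probes and then replaying $t$ rounds of a rank-greedy distributed MIS rule locally on that ball. Gathering the ball can cost up to $\Delta^t$ probes, so simulating an $\Omega(\log \Delta / \log\log \Delta)$-round algorithm (the Kuhn et al. lower bound) already forces $\Delta^{\Omega(\log \Delta / \log\log \Delta)}$ queries. The graph interface `neighbors`, the helpers `gather_ball`/`in_mis`, and the rank-greedy rule are illustrative assumptions, not the paper's construction.

```python
import random
from collections import deque

def gather_ball(neighbors, v, t):
    """Collect every vertex within distance t of v via BFS neighbor probes."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == t:
            continue
        for w in neighbors(u):            # one probe per incident edge
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def in_mis(neighbors, v, t, seed=0):
    """Decide whether v joins the MIS built by t rounds of rank-greedy MIS."""
    ball = gather_ball(neighbors, v, t)   # up to ~Delta^t probes
    # Shared randomness: ranks are derived from the same seed, so different
    # queries simulate the same global execution consistently.
    rank = {u: random.Random(hash((seed, u))).random() for u in ball}
    alive, mis = set(ball), set()
    for _ in range(t):                    # one simulated distributed round
        joins = {u for u in alive
                 if all(rank[u] < rank[w] for w in neighbors(u) if w in alive)}
        mis |= joins
        alive -= joins | {w for u in joins for w in neighbors(u)}
    return v in mis
```

A query would look like `in_mis(lambda u: adj[u], v, t)` for an adjacency-list dictionary `adj`; if $v$ is still undecided after $t$ simulated rounds, this sketch conservatively reports False.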