Faster Information Dissemination in Dynamic Networks via Network Coding

We use network coding to improve the speed of distributed computation in the dynamic network model of Kuhn, Lynch and Oshman [STOC '10]. In this model an adversary adaptively chooses a new network topology in every round, making even basic distributed computations challenging.

Kuhn et al. show that n nodes, each starting with a d-bit token, can broadcast all tokens to all nodes in O(n^2) rounds using b-bit messages, where b > d + log n. Their algorithms take the natural approach of token forwarding: in every round each node broadcasts some particular token it knows. They prove a matching Omega(n^2) lower bound for a natural class of token-forwarding algorithms and an Omega(n log n) lower bound that applies to all token-forwarding algorithms.

We use network coding, transmitting random linear combinations of tokens, to break both lower bounds. The performance of our algorithms depends quadratically on the message size b: they broadcast the n tokens in roughly (d/b^2) n^2 rounds. For b = d = Theta(log n) our algorithms use O(n^2/log n) rounds, breaking the first lower bound, while for larger message sizes we obtain linear-time algorithms.

We also consider networks whose topology changes only every T rounds and obtain an additional factor-T^2 speedup. This contrasts with related lower and upper bounds of Kuhn et al. implying that natural token-forwarding algorithms can achieve a speedup of T, but not more. Lastly, we give a general way to derandomize random linear network coding, which also leads to new deterministic information dissemination algorithms.
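To make the "random linear combinations of tokens" idea concrete, here is a minimal sketch of random linear network coding over GF(2), the primitive the abstract refers to. It is not the paper's algorithm or analysis: the Node class, the fixed token length, the explicit n-bit coefficient vector carried alongside each payload, and the toy topology in the demo are all illustrative assumptions. How to pack coefficients and payload into b-bit messages and bound the round complexity against an adaptive adversary is exactly what the paper works out.

```python
# Sketch of random linear network coding over GF(2) (illustrative, not the
# paper's algorithm). Each node stores coded packets as (coefficient vector,
# payload) pairs, broadcasts a random GF(2) combination of them each round,
# and keeps a received packet only if it is linearly independent of what it
# already knows.

import random


def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))


class Node:
    def __init__(self, n_tokens, token_len, my_index=None, my_token=None):
        self.n = n_tokens
        self.token_len = token_len
        self.packets = []  # list of (GF(2) coefficient vector, coded payload)
        if my_token is not None:
            coeffs = [0] * n_tokens
            coeffs[my_index] = 1
            self.packets.append((coeffs, my_token))

    def encode(self):
        """Message to broadcast: a uniformly random GF(2) combination of the
        packets this node currently knows."""
        coeffs = [0] * self.n
        payload = bytes(self.token_len)
        for c, p in self.packets:
            if random.randrange(2):  # include this packet with probability 1/2
                coeffs = [x ^ y for x, y in zip(coeffs, c)]
                payload = xor_bytes(payload, p)
        return coeffs, payload

    def receive(self, coeffs, payload):
        """Reduce the incoming packet against the stored ones (Gaussian
        elimination over GF(2)); keep it only if it adds new information."""
        c, p = list(coeffs), payload
        for basis_c, basis_p in self.packets:
            pivot = next(i for i, v in enumerate(basis_c) if v)
            if c[pivot]:
                c = [x ^ y for x, y in zip(c, basis_c)]
                p = xor_bytes(p, basis_p)
        if any(c):
            self.packets.append((c, p))

    def can_decode(self):
        """True once the stored coefficient vectors have full rank n."""
        return len(self.packets) == self.n

    def decode(self):
        """Recover all n original tokens by Gauss-Jordan elimination.
        Assumes can_decode() is True."""
        rows = [(list(c), p) for c, p in self.packets]
        for i in range(len(rows)):
            ci, pi = rows[i]
            pivot = next(j for j, v in enumerate(ci) if v)
            for k in range(len(rows)):
                ck, pk = rows[k]
                if k != i and ck[pivot]:
                    rows[k] = ([x ^ y for x, y in zip(ck, ci)],
                               xor_bytes(pk, pi))
        tokens = [b""] * self.n
        for c, p in rows:
            tokens[c.index(1)] = p
        return tokens


if __name__ == "__main__":
    # Toy demo with a randomly chosen sender per node each round; this random
    # pairing is only a stand-in for the adversarially chosen topologies the
    # paper considers.
    n, d_bytes = 8, 4
    tokens = [bytes([i]) * d_bytes for i in range(n)]
    nodes = [Node(n, d_bytes, i, tokens[i]) for i in range(n)]
    rounds = 0
    while not all(nd.can_decode() for nd in nodes):
        msgs = [nd.encode() for nd in nodes]
        for nd in nodes:
            nd.receive(*msgs[random.randrange(n)])
        rounds += 1
    assert all(nd.decode() == tokens for nd in nodes)
    print(f"all {n} tokens decoded after {rounds} rounds")
```

The reason random combinations help is a standard property of coding over GF(2): whenever a sender knows something the receiver does not, a uniformly random combination of the sender's packets falls outside the receiver's span with probability at least 1/2, so each message is rank-increasing for the receiver with constant probability regardless of which tokens either side holds.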