
Near-linear Time Dispersion of Mobile Agents

Abstract

Consider that there are $k \le n$ agents in a simple, connected, and undirected graph $G=(V,E)$ with $n$ nodes and $m$ edges. The goal of the dispersion problem is to move these $k$ agents to distinct nodes. Agents can communicate only when they are at the same node, and no other means of communication, such as whiteboards, are available. We assume that the agents operate synchronously. We consider two scenarios: when all agents are initially located at any single node (the rooted setting) and when they are initially distributed over any one or more nodes (the general setting). Kshemkalyani and Sharma presented a dispersion algorithm for the general setting, which uses $O(m_k)$ time and $\log(k+\delta)$ bits of memory per agent [OPODIS 2021]. Here, $m_k$ is the maximum number of edges in any induced subgraph of $G$ with $k$ nodes, and $\delta$ is the maximum degree of $G$. This algorithm is the fastest in the literature, as no algorithm with $o(m_k)$ time has been discovered even for the rooted setting. In this paper, we present faster algorithms for both the rooted and the general settings. First, we present an algorithm for the rooted setting that solves the dispersion problem in $O(k \log \min(k,\delta)) = O(k \log k)$ time using $O(\log \delta)$ bits of memory per agent. Next, we propose an algorithm for the general setting that achieves dispersion in $O(k (\log k)(\log \min(k,\delta))) = O(k \log^2 k)$ time using $O(\log(k+\delta))$ bits. Finally, for the rooted setting, we give a time-optimal, i.e., $O(k)$-time, algorithm with $O(\delta)$ bits of space per agent.
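To make the problem statement concrete, the sketch below is a minimal centralized simulation of the classic DFS-based dispersion baseline in the rooted setting: a group of $k$ agents starts at a single root and one agent is settled at every newly visited node. This is only an illustration of the dispersion objective under assumed inputs (an adjacency-list dictionary, a chosen root, and $k \le n$); it is not the paper's $O(k \log k)$-time or $O(k)$-time distributed algorithm, which operate with limited per-agent memory and no global view.

```python
# Hypothetical illustration: centralized DFS-based dispersion baseline in the
# rooted setting. NOT the algorithms proposed in the paper; those are
# distributed and use only O(log delta) or O(delta) bits per agent.

def dfs_dispersion(adj, root, k):
    """Return a dict mapping each of the k agents to a distinct node.

    adj  : dict node -> list of neighbor nodes (simple, connected, undirected graph)
    root : node where all k agents initially reside (rooted setting)
    k    : number of agents, assumed to satisfy k <= number of nodes
    """
    placement = {}        # agent id -> node where that agent settles
    visited = set()
    stack = [root]
    agent = 0
    while stack and agent < k:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        placement[agent] = v          # settle one agent at the newly visited node
        agent += 1
        for u in adj[v]:
            if u not in visited:
                stack.append(u)
    return placement


if __name__ == "__main__":
    # Small example graph: a path 0-1-2-3 with an extra edge 1-3.
    adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
    print(dfs_dispersion(adj, root=0, k=3))   # three agents on three distinct nodes
```

In a genuinely distributed execution, the unsettled group carries the DFS state itself, which is what leads to the $O(m_k)$-time baseline that the paper improves upon.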

