We study networks of communicating learning agents that cooperate to solve a common nonstochastic bandit problem. Agents use an underlying communication network to get messages about actions selected by other agents, and drop messages that took more than $d$ hops to arrive, where $d$ is a delay parameter. We introduce \textsc{Exp3-Coop}, a cooperative version of the {\sc Exp3} algorithm, and prove that with $K$ actions and $N$ agents the average per-agent regret after $T$ rounds is at most of order $\sqrt{\bigl(d+1+\frac{K}{N}\,\alpha_{\le d}(G)\bigr)(T\ln K)}$, where $\alpha_{\le d}(G)$ is the independence number of the $d$-th power of the connected communication graph $G$. We then show that for any connected graph, for $d = \sqrt{K}$ the regret bound is $K^{1/4}\sqrt{T}$, strictly better than the minimax regret $\sqrt{KT}$ for noncooperating agents. More informed choices of $d$ lead to bounds which are arbitrarily close to the full-information minimax regret $\sqrt{T\ln K}$ when $G$ is dense. When $G$ has sparse components, we show that a variant of \textsc{Exp3-Coop}, allowing agents to choose their parameters according to their centrality in $G$, strictly improves the regret. Finally, as a by-product of our analysis, we provide the first characterization of the minimax regret for bandit learning with delay.
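For intuition, here is a rough version of the calculation behind the $d=\sqrt{K}$ choice (a sketch, assuming the standard fact that $\alpha_{\le d}(G) = O(N/d)$ for any connected graph $G$ on $N$ vertices, since an independent set in the $d$-th power consists of vertices pairwise more than $d$ hops apart):
\[
\sqrt{\Bigl(d+1+\tfrac{K}{N}\,\alpha_{\le d}(G)\Bigr) T\ln K}
\;=\; O\!\Bigl(\sqrt{\bigl(d+\tfrac{K}{d}\bigr)\, T\ln K}\Bigr)
\;=\; O\!\bigl(K^{1/4}\sqrt{T\ln K}\bigr)
\quad\text{for } d=\sqrt{K},
\]
because $d + K/d$ is minimized at $d=\sqrt{K}$, where it equals $2\sqrt{K}$.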