ResearchTrend.AI


arXiv:1602.04741
Delay and Cooperation in Nonstochastic Bandits

15 February 2016
Nicolò Cesa-Bianchi
Claudio Gentile
Yishay Mansour
Alberto Minora
Abstract

We study networks of communicating learning agents that cooperate to solve a common nonstochastic bandit problem. Agents use an underlying communication network to get messages about actions selected by other agents, and drop messages that took more than $d$ hops to arrive, where $d$ is a delay parameter. We introduce Exp3-Coop, a cooperative version of the Exp3 algorithm, and prove that with $K$ actions and $N$ agents the average per-agent regret after $T$ rounds is at most of order $\sqrt{\bigl(d+1+\tfrac{K}{N}\alpha_{\le d}\bigr)(T\ln K)}$, where $\alpha_{\le d}$ is the independence number of the $d$-th power of the connected communication graph $G$. We then show that for any connected graph, for $d=\sqrt{K}$ the regret bound is $K^{1/4}\sqrt{T}$, strictly better than the minimax regret $\sqrt{KT}$ for noncooperating agents. More informed choices of $d$ lead to bounds which are arbitrarily close to the full information minimax regret $\sqrt{T\ln K}$ when $G$ is dense. When $G$ has sparse components, we show that a variant of Exp3-Coop, allowing agents to choose their parameters according to their centrality in $G$, strictly improves the regret. Finally, as a by-product of our analysis, we provide the first characterization of the minimax regret for bandit learning with delay.
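To make the setting concrete, here is a small toy simulation in the spirit of the cooperative scheme the abstract describes: $N$ agents on a communication graph each run exponential weights (Exp3-style), and every agent also applies the importance-weighted loss estimates produced by agents within $d$ hops, delayed by the hop distance. This is a hedged sketch under simplifying assumptions, not the paper's exact Exp3-Coop update; the function names (`exp3_coop_sim`, `hop_distances`) and the direct "apply each estimate once, after its hop-distance delay" message model are illustrative choices of ours.

```python
import numpy as np
from collections import deque

def hop_distances(adj):
    """All-pairs hop distances via BFS; adj is an adjacency list."""
    n = len(adj)
    dist = np.full((n, n), np.inf)
    for s in range(n):
        dist[s, s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if dist[s, w] == np.inf:
                    dist[s, w] = dist[s, u] + 1
                    queue.append(w)
    return dist

def exp3_coop_sim(adj, K, T, d, eta, loss_fn, seed=0):
    """Toy cooperative bandit simulation (illustrative, not the paper's
    exact algorithm): each agent runs exponential weights and also
    applies the importance-weighted loss estimates of every agent
    within d hops, delayed by the hop distance; farther messages are
    dropped, matching the delay parameter in the abstract."""
    rng = np.random.default_rng(seed)
    N = len(adj)
    dist = hop_distances(adj)
    logw = np.zeros((N, K))   # log-weights, for numerical stability
    history = []              # history[t][u] = estimate agent u produced at round t
    total_loss = 0.0
    for t in range(T):
        p = np.exp(logw - logw.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        ests = np.zeros((N, K))
        for v in range(N):
            arm = rng.choice(K, p=p[v])
            loss = loss_fn(t, arm)            # adversary's loss in [0, 1]
            total_loss += loss
            ests[v, arm] = loss / p[v, arm]   # importance-weighted estimate
        history.append(ests)
        for v in range(N):                    # cooperative update
            for u in range(N):
                delay = dist[u, v]            # message age = hop distance
                if delay <= d and t - delay >= 0:
                    logw[v] -= eta * history[int(t - delay)][u]
    return total_loss / (N * T)   # average per-agent, per-round loss

# Example: 4 agents on a cycle, K = 4 arms, arm 0 always best.
# With d = 2 every agent eventually sees every estimate, and the
# average loss drops well below the uniform-play baseline of 0.75.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
avg = exp3_coop_sim(adj, K=4, T=500, d=2, eta=0.05,
                    loss_fn=lambda t, arm: 0.0 if arm == 0 else 1.0)
```

Sharing estimates lets each agent learn from roughly $N$ observations per round instead of one, which is the intuition behind the $\tfrac{K}{N}\alpha_{\le d}$ term shrinking as the graph gets denser.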
