arXiv:1910.02100

Social Learning in Multi Agent Multi Armed Bandits

4 October 2019
Abishek Sankararaman
A. Ganesh
Sanjay Shakkottai
Abstract

In this paper, we introduce a distributed version of the classical stochastic multi-armed bandit (MAB) problem. Our setting consists of a large number of agents $n$ that collaboratively and simultaneously solve the same instance of a $K$-armed MAB to minimize the average cumulative regret over all agents. The agents can communicate and collaborate with each other *only* through a pairwise, asynchronous, gossip-based protocol that exchanges a limited number of bits. At each point in time, agents decide (i) which arm to play, (ii) whether to communicate, and if so, (iii) what to communicate and with whom. Agents in our model are decentralized: their actions depend only on their own observed history. We develop a novel algorithm in which agents, whenever they choose to communicate, send only arm ids, and not samples, to another agent chosen uniformly and independently at random. The per-agent regret scaling achieved by our algorithm is $O\left( \frac{\lceil K/n \rceil + \log(n)}{\Delta} \log(T) + \frac{\log^3(n) \log\log(n)}{\Delta^2} \right)$, where $\Delta$ is the gap between the mean rewards of the two best arms. Furthermore, any agent in our algorithm communicates only $\Theta(\log(T))$ times over a time interval of length $T$. We compare our results to two benchmarks: one with no communication among agents, and one with complete interaction. We show, both theoretically and empirically, that our algorithm achieves a significant reduction in per-agent regret compared to the no-collaboration case, and in communication complexity compared to the full-interaction setting, which requires $T$ communication attempts by an agent over $T$ arm pulls. Our result thus demonstrates that even a minimal level of collaboration among agents enables a significant reduction in per-agent regret.
