Collaborative Min-Max Regret in Grouped Multi-Armed Bandits

Main: 10 pages
Bibliography: 3 pages
Appendix: 19 pages
Abstract

We study the impact of sharing exploration in multi-armed bandits in a grouped setting, where groups have overlapping feasible action sets [Baek and Farias '24]. In this grouped bandit setting, groups share reward observations, and the objective is to minimize the collaborative regret, defined as the maximum regret across groups. This naturally captures applications in which one aims to balance the exploration burden between groups or populations -- it is known that standard algorithms can lead to significantly imbalanced exploration costs between groups. We address this problem by introducing Col-UCB, an algorithm that dynamically coordinates exploration across groups. We show that Col-UCB achieves both optimal minimax and instance-dependent collaborative regret up to logarithmic factors. These bounds adapt to the structure of the shared action sets between groups, providing insight into when collaboration yields significant benefits over each group learning its best action independently.
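To make the setting concrete, the following is a minimal toy sketch of a grouped bandit instance with shared observations. It implements a naive baseline (each group independently runs a standard UCB index over its feasible arms, while all reward observations are pooled into shared statistics), not the paper's Col-UCB algorithm, whose coordination rule is not specified in the abstract; the arm means and group structure are illustrative assumptions.

```python
import math
import random

# Illustrative instance (not from the paper): 4 Bernoulli arms,
# 2 groups with overlapping feasible action sets.
MEANS = [0.9, 0.5, 0.6, 0.2]      # true arm means
GROUPS = [[0, 1], [1, 2, 3]]      # feasible arms per group; arm 1 is shared

def shared_ucb_baseline(horizon, seed=0):
    """Naive baseline: per-group UCB over shared (pooled) statistics.

    Returns the collaborative regret, i.e. the maximum cumulative
    regret over groups after `horizon` rounds per group.
    """
    rng = random.Random(seed)
    counts = [0] * len(MEANS)                 # shared pull counts
    sums = [0.0] * len(MEANS)                 # shared reward sums
    regret = [0.0] * len(GROUPS)              # per-group cumulative regret
    best = [max(MEANS[a] for a in S) for S in GROUPS]

    for t in range(1, horizon + 1):
        for g, feasible in enumerate(GROUPS):
            def ucb(a):
                if counts[a] == 0:
                    return float("inf")       # force initial exploration
                mean = sums[a] / counts[a]
                return mean + math.sqrt(2 * math.log(t) / counts[a])

            arm = max(feasible, key=ucb)
            reward = 1.0 if rng.random() < MEANS[arm] else 0.0
            counts[arm] += 1                  # observations are shared
            sums[arm] += reward
            regret[g] += best[g] - MEANS[arm]

    return max(regret)                        # min-max (collaborative) objective
```

Because statistics are pooled, a pull by one group also reduces another group's uncertainty about shared arms; the paper's contribution is to coordinate *which* group pays this exploration cost so that the maximum regret, rather than the sum, is small.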

@article{blanchard2025_2506.10313,
  title={Collaborative Min-Max Regret in Grouped Multi-Armed Bandits},
  author={Moïse Blanchard and Vineet Goyal},
  journal={arXiv preprint arXiv:2506.10313},
  year={2025}
}