Blocked Collaborative Bandits: Online Collaborative Filtering with Per-Item Budget Constraints

31 October 2023
S. Pal, A. Suggala, Karthikeyan Shanmugam, Prateek Jain
Abstract

We consider the problem of \emph{blocked} collaborative bandits, where there are multiple users, each with an associated multi-armed bandit problem. These users are grouped into \emph{latent} clusters such that the mean reward vectors of users within the same cluster are identical. Our goal is to design algorithms that maximize the cumulative reward accrued by all users over time, under the \emph{constraint} that no arm of a user is pulled more than $\mathsf{B}$ times. This problem was originally considered by \cite{Bresler:2014}, and designing regret-optimal algorithms for it has since remained open. In this work, we propose an algorithm called \texttt{B-LATTICE} (Blocked Latent bAndiTs via maTrIx ComplEtion) that collaborates across users, while simultaneously satisfying the budget constraints, to maximize their cumulative rewards. Theoretically, under certain reasonable assumptions on the latent structure, with $\mathsf{M}$ users, $\mathsf{N}$ arms, $\mathsf{T}$ rounds per user, and $\mathsf{C}=O(1)$ latent clusters, \texttt{B-LATTICE} achieves a per-user regret of $\widetilde{O}(\sqrt{\mathsf{T}(1 + \mathsf{N}\mathsf{M}^{-1})})$ under a budget constraint of $\mathsf{B}=\Theta(\log \mathsf{T})$. These are the first sub-linear regret bounds for this problem, and they match the minimax regret bounds when $\mathsf{B}=\mathsf{T}$. Empirically, we demonstrate that our algorithm outperforms baselines even when $\mathsf{B}=1$. \texttt{B-LATTICE} runs in phases: in each phase it clusters users into groups and collaborates across users within a group to quickly learn their reward models.
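To make the phased structure concrete, below is a minimal, hypothetical sketch of the general pattern the abstract describes, not the authors' \texttt{B-LATTICE} implementation: in each phase, users are grouped by their empirical reward estimates, observations are pooled within a group, and no (user, arm) pair is pulled more than $\mathsf{B}$ times. The grouping rule, parameter values, and variable names are all illustrative assumptions; the actual algorithm relies on matrix completion and carries the regret guarantee stated above.

```python
# Illustrative sketch only -- NOT the authors' B-LATTICE implementation.
# It mimics the phase-based pattern from the abstract: group users by their
# empirical reward estimates, pool observations within a group, and never
# pull any (user, arm) pair more than B times. All details are assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, N, C, T, B = 12, 20, 2, 100, 5   # users, arms, latent clusters, rounds per user, per-item budget
PHASES = 5

true_cluster = rng.integers(0, C, size=M)          # hidden user-to-cluster assignment
cluster_means = rng.uniform(0.0, 1.0, size=(C, N))  # mean rewards shared within a cluster

counts = np.zeros((M, N))   # number of pulls of each (user, arm) pair
sums = np.zeros((M, N))     # summed rewards of each (user, arm) pair

def empirical_means():
    # 0.5 acts as an uninformative default for arms a user has never pulled.
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.5)

for phase in range(PHASES):
    # Crude grouping step: users with the same thresholded mean profile are
    # treated as one cluster. (The paper's method uses low-rank matrix
    # completion here, per the B-LATTICE acronym; this is only a stand-in.)
    signatures = (empirical_means() > 0.5).astype(int)
    _, labels = np.unique(signatures, axis=0, return_inverse=True)
    labels = labels.ravel()

    for _ in range(T // PHASES):
        for u in range(M):
            peers = np.where(labels == labels[u])[0]
            pooled = sums[peers].sum(axis=0) / np.maximum(counts[peers].sum(axis=0), 1)
            # Choose the best-looking arm that still has budget left for this user.
            arm = next((a for a in np.argsort(-pooled) if counts[u, a] < B), None)
            if arm is None:
                continue   # every arm of this user has hit the per-item budget
            reward = rng.binomial(1, cluster_means[true_cluster[u], arm])
            counts[u, arm] += 1
            sums[u, arm] += reward

print("average reward per pull:", sums.sum() / max(counts.sum(), 1))
```

The per-item budget is enforced by the `counts[u, a] < B` guard before every pull; the collaborative gain comes from the pooled estimate over `peers`, which shrinks each user's estimation error roughly in proportion to the size of their group.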
