Does Feedback Help in Bandits with Arm Erasures?

Merve Karakas
Osama Hanna
Lin F. Yang
Christina Fragouli
Abstract

We study a distributed multi-armed bandit (MAB) problem over arm erasure channels, motivated by the increasing adoption of MAB algorithms over communication-constrained networks. In this setup, the learner communicates the chosen arm to an agent over a channel that erases each transmission with probability $\epsilon \in [0,1)$; if an erasure occurs, the agent continues pulling the last successfully received arm; the learner always observes the reward of the arm actually pulled. In past work, we considered the case where the agent cannot convey feedback to the learner, so the learner does not know whether the arm played is the requested one or the last successfully received one. In this paper, we instead consider the case where the agent can send feedback to the learner indicating whether the arm request was received, so the learner knows exactly which arm was played. Surprisingly, we prove that erasure feedback does not improve the order of the worst-case regret upper bound over the previously studied no-feedback setting. In particular, we prove a regret lower bound of $\Omega(\sqrt{KT} + K/(1-\epsilon))$, where $K$ is the number of arms and $T$ the time horizon, which matches the no-feedback upper bounds up to logarithmic factors. We note, however, that the availability of feedback enables simpler algorithm designs that may achieve better constant factors (albeit not better order) in the regret; we design one such algorithm and evaluate its performance numerically.
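To make the interaction model concrete, below is a minimal simulation sketch of the erasure channel with per-round feedback. It is illustrative only: the Bernoulli rewards, the parameter names (`eps`, `pulls_per_arm`), and the naive "re-request each arm until delivery" strategy are assumptions for the sketch, not the algorithm from the paper; the sketch just shows what the feedback bit lets the learner track.

```python
import numpy as np


def simulate(K=10, T=10_000, eps=0.5, seed=0):
    """Simulate the arm-erasure channel with per-round erasure feedback.

    Illustrative sketch: the learner re-requests each arm until the channel
    delivers it (possible because feedback reveals erasures), then counts a
    fixed budget of on-target pulls. Not the paper's algorithm.
    """
    rng = np.random.default_rng(seed)
    means = rng.uniform(0, 1, K)      # assumed Bernoulli arm means
    played_arm = 0                    # agent starts on a default arm
    total_reward = 0.0

    pulls_per_arm = T // K            # naive uniform exploration budget
    t = 0
    for arm in range(K):
        pulls = 0
        while t < T and pulls < pulls_per_arm:
            requested = arm
            erased = rng.random() < eps    # channel erases the request
            if not erased:
                played_arm = requested     # agent switches to requested arm
            # feedback: learner observes `erased`, so it knows `played_arm`
            reward = rng.random() < means[played_arm]
            total_reward += reward
            if played_arm == arm:
                pulls += 1                 # count only on-target pulls
            t += 1
        if t >= T:
            break
    return total_reward, means


if __name__ == "__main__":
    reward, means = simulate()
    print(f"total reward: {reward:.0f}, best-arm mean: {means.max():.3f}")
```

Each switch to a new arm costs, in expectation, $1/(1-\epsilon)$ rounds before the request gets through, which is the source of the additive $K/(1-\epsilon)$ term in the lower bound above.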

@article{karakas2025_2504.20894,
  title={Does Feedback Help in Bandits with Arm Erasures?},
  author={Merve Karakas and Osama Hanna and Lin F. Yang and Christina Fragouli},
  journal={arXiv preprint arXiv:2504.20894},
  year={2025}
}