Preference-centric Bandits: Optimality of Mixtures and Regret-efficient Algorithms

Meltem Tatlı
Arpan Mukherjee
Prashanth L.A.
Karthikeyan Shanmugam
Ali Tajer
Abstract

The objective of canonical multi-armed bandits is to identify and repeatedly select an arm with the largest reward, often in the form of the expected value of the arm's probability distribution. Such a utilitarian perspective and focus on the probability models' first moments, however, are agnostic to the distributions' tail behavior and its implications for variability and risk in decision-making. This paper introduces a principled framework for shifting from expectation-based evaluation to an alternative reward formulation, termed a preference metric (PM). PMs can place the desired emphasis on different reward realizations and can encode a richer modeling of preferences that incorporates risk aversion, robustness, or other desired attitudes toward uncertainty. A fundamentally distinct observation in such a PM-centric perspective is that designing bandit algorithms follows a significantly different principle: as opposed to the reward-based models, in which the optimal sampling policy converges to repeatedly sampling from the single best arm, in the PM-centric framework the optimal policy converges to selecting a mix of arms according to specific mixing weights. Designing such mixture policies departs from the principles of canonical bandit algorithm design in significant ways, primarily because there are uncountably many possible mixtures. The paper formalizes the PM-centric framework and presents two algorithm classes (horizon-dependent and anytime) that learn and track mixtures in a regret-efficient fashion. These algorithms have two distinctions from their canonical counterparts: (i) they involve an estimation routine that forms reliable estimates of the optimal mixture, and (ii) they are equipped with tracking mechanisms that navigate the arm selection fractions toward the optimal mixture. The algorithms' regret guarantees are analyzed under various algebraic forms of the PMs.
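To make the two distinguishing ingredients (mixture estimation and tracking) concrete, below is a minimal, hypothetical sketch in Python; it is not the paper's algorithm. It assumes two Gaussian arms, uses a mean-minus-standard-deviation functional of the mixture distribution as a stand-in PM, estimates the PM-optimal mixture by grid search over the weight simplex, and tracks it by pulling the arm whose empirical selection fraction lags its target weight the most. All function names and parameters are illustrative.

# Hypothetical sketch of a PM-centric mixture-tracking bandit (not from the paper).
# Assumptions: Gaussian arms, a mean-variance-style stand-in PM, two arms,
# grid search over mixture weights, and a greedy tracking rule.
import numpy as np

rng = np.random.default_rng(0)

def pm_mean_variance(weights, means, variances, lam=1.0):
    """Stand-in preference metric: mean minus lam * std of the mixture.

    For a mixture with weights w_k over arms with means mu_k and variances
    sigma_k^2, the mixture mean is sum_k w_k mu_k and the mixture variance is
    sum_k w_k (sigma_k^2 + mu_k^2) - (sum_k w_k mu_k)^2.
    """
    mix_mean = np.dot(weights, means)
    mix_var = np.dot(weights, variances + means**2) - mix_mean**2
    return mix_mean - lam * np.sqrt(max(mix_var, 0.0))

def estimate_optimal_mixture(means, variances, grid=101):
    """Grid search over the simplex (two arms here) for the PM-maximizing mixture."""
    best_w, best_val = None, -np.inf
    for w0 in np.linspace(0.0, 1.0, grid):
        w = np.array([w0, 1.0 - w0])
        val = pm_mean_variance(w, means, variances)
        if val > best_val:
            best_w, best_val = w, val
    return best_w

def pm_mixture_bandit(arm_dists, horizon=2000, explore_each=50):
    """Two-phase sketch: forced exploration, then track the estimated optimal mixture."""
    K = len(arm_dists)
    counts, sums, sq_sums = np.zeros(K), np.zeros(K), np.zeros(K)

    def pull(k):
        r = arm_dists[k]()
        counts[k] += 1
        sums[k] += r
        sq_sums[k] += r**2

    # Phase 1: forced exploration to obtain initial distribution estimates.
    for k in range(K):
        for _ in range(explore_each):
            pull(k)

    # Phase 2: re-estimate the PM-optimal mixture at each round and track it.
    for _ in range(K * explore_each, horizon):
        means = sums / counts
        variances = np.maximum(sq_sums / counts - means**2, 1e-12)
        w_star = estimate_optimal_mixture(means, variances)
        # Tracking rule: pull the arm whose empirical fraction lags its target most.
        deficit = w_star - counts / counts.sum()
        pull(int(np.argmax(deficit)))

    return counts / counts.sum()

# Usage: two Gaussian arms; the returned vector is the realized selection fraction.
arms = [lambda: rng.normal(1.0, 2.0), lambda: rng.normal(0.8, 0.5)]
print(pm_mixture_bandit(arms))

Depending on the chosen PM, the estimated optimal mixture can be strictly interior or collapse onto a single arm; the PMs, estimators, and tracking rules studied in the paper are more general than this sketch.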

@article{tatli2025_2504.20877,
  title={Preference-centric Bandits: Optimality of Mixtures and Regret-efficient Algorithms},
  author={Meltem Tatlı and Arpan Mukherjee and Prashanth L.A. and Karthikeyan Shanmugam and Ali Tajer},
  journal={arXiv preprint arXiv:2504.20877},
  year={2025}
}