
Multi-armed Bandits with Compensation

5 November 2018
Jeff Johnson
Longbo Huang
Abstract

We propose and study the known-compensation multi-armed bandit (KCMAB) problem, in which a system controller offers a set of arms to many short-term players over $T$ steps. In each step, one short-term player arrives at the system. Upon arrival, the player aims to select the arm with the current best average reward and receives a stochastic reward associated with that arm. To incentivize players to explore other arms, the controller provides a suitable payment compensation to players. The controller's objective is to maximize the total reward collected by players while minimizing the compensation. We first provide a compensation lower bound $\Theta\!\left(\sum_i \frac{\Delta_i \log T}{KL_i}\right)$, where $\Delta_i$ and $KL_i$ are the expected reward gap and the Kullback-Leibler (KL) divergence between the distributions of arm $i$ and the best arm, respectively. We then analyze three algorithms for the KCMAB problem and obtain their regrets and compensations. We show that all three algorithms achieve $O(\log T)$ regret and $O(\log T)$ compensation, matching the theoretical lower bound. Finally, we present experimental results demonstrating the performance of the algorithms.
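The setting described above can be sketched in simulation. The snippet below is a minimal, hypothetical illustration (not the paper's actual algorithms): it runs UCB exploration with Bernoulli arms and, whenever the chosen arm is not the current empirical best, charges the controller the empirical-mean gap as compensation, i.e. the payment needed to make a myopic short-term player accept the exploratory arm. The function name, parameters, and the specific compensation rule are assumptions for illustration only.

```python
import math
import random

def kcmab_ucb(means, T, seed=0):
    """Illustrative KCMAB simulation: UCB exploration with myopic players.

    means : list of Bernoulli reward probabilities, one per arm (assumed).
    T     : number of steps (one short-term player arrives per step).
    Returns (total_reward, total_compensation).
    """
    rng = random.Random(seed)
    K = len(means)
    counts = [0] * K        # pulls per arm
    sums = [0.0] * K        # cumulative reward per arm
    total_reward = 0.0
    total_comp = 0.0
    for t in range(T):
        if t < K:
            arm = t  # initialization: pull each arm once
        else:
            avg = [sums[i] / counts[i] for i in range(K)]
            ucb = [avg[i] + math.sqrt(2 * math.log(t) / counts[i])
                   for i in range(K)]
            arm = max(range(K), key=lambda i: ucb[i])
            # A myopic player would pick the empirical best; the controller
            # pays the empirical-mean gap to steer them to the UCB arm.
            total_comp += max(0.0, max(avg) - avg[arm])
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += r
        total_reward += r
    return total_reward, total_comp
```

Because UCB pulls each suboptimal arm only $O(\log T)$ times, the per-step gap payments accumulate to $O(\log T)$ total compensation in this sketch, mirroring the scaling the paper proves for its algorithms.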
