Multi-Fidelity Multi-Armed Bandits Revisited

13 June 2023
Xuchuang Wang
Qingyun Wu
Wei Chen
John C. S. Lui
Abstract

We study the multi-fidelity multi-armed bandit (MF-MAB), an extension of the canonical multi-armed bandit (MAB) problem in which each arm can be pulled at multiple fidelities, each with its own cost and observation accuracy. We study both the best arm identification with fixed confidence (BAI) objective and the regret minimization objective. For BAI, we present (a) a cost complexity lower bound, (b) an algorithmic framework with two alternative fidelity selection procedures, and (c) cost complexity upper bounds for both procedures. From both cost complexity bounds of MF-MAB, one can recover the standard sample complexity bounds of the classic (single-fidelity) MAB. For regret minimization in MF-MAB, we propose a new regret definition, prove its problem-independent regret lower bound $\Omega(K^{1/3}\Lambda^{2/3})$ and problem-dependent lower bound $\Omega(K\log \Lambda)$, where $K$ is the number of arms and $\Lambda$ is the decision budget in terms of cost, and devise an elimination-based algorithm whose worst-cost regret upper bound matches its corresponding lower bound up to logarithmic terms, and whose problem-dependent bound matches its corresponding lower bound in terms of $\Lambda$.
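To make the MF-MAB setting concrete, the sketch below simulates arms that can be pulled at different fidelities, where a higher fidelity costs more but bounds the observation bias more tightly. This is a toy illustration under assumed parameter names (`costs`, `zeta`, the `naive_best_arm` helper); it is not the paper's formal model or its elimination-based algorithm.

```python
import random

class MultiFidelityBandit:
    """Toy MF-MAB environment (illustrative assumption, not the paper's model).

    Pulling arm k at fidelity m costs costs[m] and returns a noisy reward
    whose systematic bias from the true mean is at most zeta[m].
    Higher fidelity: lower bias bound, higher cost.
    """
    def __init__(self, means, costs, zeta, seed=0):
        self.means, self.costs, self.zeta = means, costs, zeta
        self.rng = random.Random(seed)
        self.spent = 0.0  # total cost consumed so far

    def pull(self, arm, fidelity):
        self.spent += self.costs[fidelity]
        # bias lies within +/- zeta[fidelity]; Gaussian noise on top
        bias = self.rng.uniform(-self.zeta[fidelity], self.zeta[fidelity])
        return self.means[arm] + bias + self.rng.gauss(0, 0.1)

def naive_best_arm(env, budget, fidelity):
    """Baseline: spend the cost budget round-robin at one fixed fidelity
    and return the empirically best arm. A multi-fidelity algorithm would
    instead mix cheap low-fidelity and expensive high-fidelity pulls."""
    K = len(env.means)
    sums, counts = [0.0] * K, [0] * K
    arm = 0
    while env.spent + env.costs[fidelity] <= budget:
        sums[arm] += env.pull(arm, fidelity)
        counts[arm] += 1
        arm = (arm + 1) % K
    return max(range(K), key=lambda k: sums[k] / counts[k])
```

With two arms of means 0.2 and 0.9 and a budget of 200 cost units at the high fidelity (cost 4, bias bound 0.05), the round-robin baseline reliably identifies arm 1 as the best.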
