Good Arm Identification via Bandit Feedback

17 October 2017
H. Kano, Junya Honda, Kentaro Sakamaki, Kentaro Matsuura, Atsuyoshi Nakamura, Masashi Sugiyama
Abstract

In this paper, we consider a new stochastic multi-armed bandit problem called {\em good arm identification} (GAI), where a good arm is an arm whose expected reward is greater than or equal to a given threshold. GAI is a pure-exploration problem in which an agent repeatedly outputs an arm as soon as it is identified as good, before confirming that the remaining arms are actually not good. The objective of GAI is to minimize the number of samples required for each identification. We find that GAI faces a new kind of dilemma, the {\em exploration-exploitation dilemma of confidence}, which best arm identification does not, so GAI is not merely an extension of best arm identification. Indeed, efficient algorithm design for GAI is quite different from that for best arm identification. We derive a lower bound on the sample complexity of GAI and develop an algorithm whose sample complexity almost matches this lower bound. We also confirm experimentally that the proposed algorithm outperforms a naive algorithm and a thresholding-bandit-like algorithm, both in synthetic settings and in settings based on medical data.
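
To make the GAI setting concrete, here is a minimal sketch of one confidence-bound approach: sample the unclassified arm with the highest upper confidence bound, output an arm as good once its lower confidence bound clears the threshold, and discard it once its upper bound falls below. This is an illustrative sketch only, not the algorithm proposed in the paper; the rewards-in-[0, 1] assumption, the Hoeffding-style radius, and the names `gai_ucb`, `xi`, and `delta` are all choices made for this example.

```python
import math
import random

def gai_ucb(arms, xi, delta, max_pulls=100_000):
    """Illustrative confidence-bound sketch for good arm identification.

    arms:  list of callables, each returning a stochastic reward in [0, 1].
    xi:    threshold defining a "good" arm (expected reward >= xi).
    delta: error-probability parameter used in the confidence radius.

    Returns (good, bad): arm indices classified as good / not good,
    in the order they were identified.
    """
    k = len(arms)
    counts = [0] * k          # number of pulls per arm
    sums = [0.0] * k          # cumulative reward per arm
    active = set(range(k))    # arms not yet classified
    good, bad = [], []

    def radius(n):
        # Hoeffding-style confidence radius (one illustrative choice).
        return math.sqrt(math.log(4 * k * n * n / delta) / (2 * n))

    # Initialization: pull every arm once so all statistics are defined.
    for i in range(k):
        sums[i] += arms[i]()
        counts[i] += 1

    for _ in range(max_pulls):
        if not active:
            break
        # Sample the active arm with the highest upper confidence bound.
        i = max(active, key=lambda j: sums[j] / counts[j] + radius(counts[j]))
        sums[i] += arms[i]()
        counts[i] += 1

        mean = sums[i] / counts[i]
        r = radius(counts[i])
        if mean - r >= xi:        # confidently good: output immediately
            good.append(i)
            active.remove(i)
        elif mean + r < xi:       # confidently not good: discard
            bad.append(i)
            active.remove(i)

    return good, bad

# Example usage with hypothetical Bernoulli arms and threshold 0.6.
means = [0.3, 0.5, 0.7, 0.9]
arms = [lambda m=m: 1.0 if random.random() < m else 0.0 for m in means]
print(gai_ucb(arms, xi=0.6, delta=0.05))
```

Note how the two confidence bounds play different roles: the upper bound drives which arm to sample next, while the lower bound decides when an arm can be output as good. Balancing effort between confirming a promising arm quickly and exploring the others is, roughly, the dilemma of confidence that the abstract describes.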
