arXiv:1605.07162
Pure Exploration of Multi-armed Bandit Under Matroid Constraints

23 May 2016
Lijie Chen
Anupam Gupta
Jian Li
Abstract

We study the pure exploration problem subject to a matroid constraint (Best-Basis) in a stochastic multi-armed bandit game. In a Best-Basis instance, we are given $n$ stochastic arms with unknown reward distributions, as well as a matroid $\mathcal{M}$ over the arms. Let the weight of an arm be the mean of its reward distribution. Our goal is to identify a basis of $\mathcal{M}$ with the maximum total weight, using as few samples as possible. The problem is a significant generalization of the best arm identification problem and the top-$k$ arm identification problem, which have attracted significant attention in recent years. We study both the exact and PAC versions of Best-Basis, and provide algorithms with nearly-optimal sample complexities for these versions. Our results generalize and/or improve on several previous results for the top-$k$ arm identification problem and the combinatorial pure exploration problem when the combinatorial constraint is a matroid.
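To make the problem concrete, here is a minimal illustrative sketch (not the paper's algorithm): estimate each arm's mean with a fixed per-arm sample budget, then run the classic matroid greedy on the empirical means. The function names, the independence-oracle interface, and the fixed budget are assumptions chosen for illustration; a uniform matroid of rank $k$ is used as the example, which recovers top-$k$ arm identification.

```python
import random

def greedy_max_weight_basis(weights, is_independent):
    """Classic matroid greedy: scan arms in decreasing weight, keeping an
    arm whenever adding it preserves independence. For any matroid, this
    returns a maximum-weight basis with respect to the given weights."""
    basis = []
    for arm in sorted(range(len(weights)), key=lambda i: -weights[i]):
        if is_independent(basis + [arm]):
            basis.append(arm)
    return basis

def naive_best_basis(sample, n, is_independent, budget_per_arm=2000):
    """Naive uniform-sampling baseline (hypothetical, not the paper's
    method): pull every arm a fixed number of times, then run matroid
    greedy on the empirical means."""
    means = [sum(sample(i) for _ in range(budget_per_arm)) / budget_per_arm
             for i in range(n)]
    return greedy_max_weight_basis(means, is_independent)

if __name__ == "__main__":
    # Example instance: 5 Bernoulli arms, uniform matroid of rank 2
    # (independent sets = all sets of size <= 2), i.e. top-2 identification.
    random.seed(0)
    true_means = [0.1, 0.9, 0.5, 0.8, 0.3]
    sample = lambda i: 1.0 if random.random() < true_means[i] else 0.0
    basis = naive_best_basis(sample, len(true_means),
                             lambda s: len(s) <= 2)
    print(sorted(basis))  # with this budget, almost surely [1, 3]
```

The paper's contribution is precisely to replace the fixed uniform budget above with adaptive sampling schemes whose total sample complexity is nearly optimal for the exact and PAC versions of Best-Basis.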
