We study the pure exploration problem subject to a matroid constraint (Best-Basis) in a stochastic multi-armed bandit game. In a Best-Basis instance, we are given stochastic arms with unknown reward distributions, as well as a matroid over the arms. Let the weight of an arm be the mean of its reward distribution. Our goal is to identify a basis of the matroid with the maximum total weight, using as few samples as possible. The problem is a substantial generalization of the best arm identification problem and the top-k arm identification problem, both of which have attracted significant attention in recent years. We study both the exact and PAC versions of Best-Basis, and provide algorithms with nearly-optimal sample complexities for these versions. Our results generalize and/or improve on several previous results for the top-k arm identification problem and the combinatorial pure exploration problem when the combinatorial constraint is a matroid.
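To make the setting concrete, the following is a minimal sketch of a naive uniform-sampling baseline for the special case of a uniform matroid, where a maximum-weight basis is just the top-k arms. This is not the paper's algorithm (which achieves near-optimal sample complexity); the function name, the Bernoulli arms, and the fixed per-arm budget are all illustrative assumptions.

```python
import random

def naive_topk_identification(arms, k, samples_per_arm):
    """Uniform-sampling baseline for Best-Basis on a uniform matroid
    (i.e., top-k arm identification): pull every arm the same number
    of times and return the k arms with the highest empirical means.
    Illustrative only -- not the near-optimal algorithm of the paper."""
    means = []
    for pull in arms:
        total = sum(pull() for _ in range(samples_per_arm))
        means.append(total / samples_per_arm)
    # For a uniform matroid, the greedy max-weight basis is simply
    # the k arms with the largest (empirical) weights.
    return sorted(range(len(arms)), key=lambda i: -means[i])[:k]

# Hypothetical instance: five Bernoulli arms with unknown means.
random.seed(0)
true_means = [0.9, 0.8, 0.3, 0.2, 0.1]
arms = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in true_means]
best = naive_topk_identification(arms, k=2, samples_per_arm=2000)
print(sorted(best))  # with this budget and gap, recovers arms 0 and 1
```

For a general matroid, the final step would instead run the standard greedy algorithm over the matroid's independence oracle; the paper's contribution is doing this adaptively so that easy-to-distinguish arms consume far fewer samples than a uniform budget.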