Stochastic Top-$K$ Subset Bandits with Linear Space and Non-Linear Feedback

Many real-world problems, such as Social Influence Maximization, face the dilemma of choosing the best $K$ out of $N$ options at a given time instant. This setup can be modeled as a combinatorial bandit which chooses $K$ out of $N$ arms at each time, with an aim to achieve an efficient trade-off between exploration and exploitation. This is the first work for combinatorial bandits where the feedback received can be a non-linear function of the chosen $K$ arms. The direct use of a multi-armed bandit requires choosing among $N$-choose-$K$ options, making the state space large. In this paper, we present a novel algorithm which is computationally efficient and whose storage is linear in $N$. The proposed algorithm is a divide-and-conquer based strategy, which we call CMAB-SM. Further, the proposed algorithm achieves a \textit{regret bound} of $\tilde{\mathcal{O}}(K^{\frac{1}{2}} T^{\frac{2}{3}} N^{\frac{1}{3}})$ for a time horizon $T$, which is \textit{sub-linear} in all parameters $T$, $N$, and $K$. When applied to the problem of Social Influence Maximization, the performance of the proposed algorithm surpasses the UCB algorithm and some more sophisticated domain-specific methods.
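To make the problem setup concrete, the toy sketch below simulates the interaction protocol described above: at each round the learner picks $K$ of $N$ arms, observes a reward that is a non-linear function of the chosen arms' stochastic outcomes, and keeps only $O(N)$ per-arm statistics rather than one estimate per $N$-choose-$K$ subset. This is not the CMAB-SM algorithm from the paper; the square-root reward function, the epsilon-greedy learner, and the per-arm credit assignment are illustrative assumptions only.

```python
# Toy top-K subset bandit with non-linear feedback (illustration only,
# NOT the CMAB-SM algorithm). All numeric choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N, K, T = 10, 3, 5000                  # N arms, choose K per round, horizon T
true_means = rng.uniform(0.1, 0.9, N)  # unknown Bernoulli means of the arms

def feedback(chosen):
    """Non-linear feedback: the reward depends on the chosen arms only
    through a concave function of the sum of their stochastic outcomes."""
    outcomes = rng.random(K) < true_means[chosen]
    return np.sqrt(outcomes.sum())     # non-linear in the individual arms

# The learner stores O(N) statistics (one estimate per arm), not one per
# subset, mirroring the linear-space requirement discussed in the abstract.
counts = np.zeros(N)
est = np.zeros(N)

for t in range(T):
    if rng.random() < 0.1:                        # explore a random subset
        chosen = rng.choice(N, size=K, replace=False)
    else:                                         # exploit: top-K by estimate
        chosen = np.argsort(est)[-K:]
    r = feedback(chosen)
    # Crude per-arm credit assignment: share the observed reward equally.
    for a in chosen:
        counts[a] += 1
        est[a] += (r / K - est[a]) / counts[a]

print("estimated best subset:", sorted(np.argsort(est)[-K:]))
print("true best subset:     ", sorted(np.argsort(true_means)[-K:]))
```

Because the feedback is not a sum of per-arm rewards, naive per-arm averaging like the above can misrank arms; handling such non-linear feedback with only linear storage is exactly the difficulty the paper's divide-and-conquer strategy is designed to address.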