Tighter Regret Bounds for Influence Maximization and Other Combinatorial Semi-Bandits with Probabilistically Triggered Arms

Abstract

We study the combinatorial multi-armed bandit with probabilistically triggered arms (CMAB-T) and semi-bandit feedback. We resolve a serious issue in prior CMAB-T studies, where the regret bounds contain a possibly exponentially large factor of 1/p^*, where p^* is the minimum positive probability that an arm is triggered by any action. We address this issue by introducing triggering probability modulated (TPM) bounded smoothness conditions into the general CMAB-T framework, and show that many applications, such as influence maximization bandits and combinatorial cascading bandits, satisfy these TPM conditions. As a result, we completely remove the factor of 1/p^* from the regret bounds, achieving significantly better regret bounds for influence maximization and cascading bandits than before. Finally, we provide lower bound results showing that the factor 1/p^* is unavoidable for general CMAB-T problems, suggesting that the TPM conditions are crucial in removing this factor.
