We study contextual combinatorial bandits with probabilistically triggered arms (C$^2$MAB-T) under a variety of smoothness conditions that capture a wide range of applications, such as contextual cascading bandits and contextual influence maximization bandits. Under the triggering probability modulated (TPM) condition, we devise the C$^2$-UCB-T algorithm and propose a novel analysis that achieves an $\tilde{O}(d\sqrt{KT})$ regret bound, removing a potentially exponentially large factor $O(1/p_{\min})$, where $d$ is the dimension of contexts, $p_{\min}$ is the minimum positive probability that any arm can be triggered, and batch-size $K$ is the maximum number of arms that can be triggered per round. Under the variance modulated (VM) or triggering probability and variance modulated (TPVM) conditions, we propose a new variance-adaptive algorithm VAC$^2$-UCB and derive a regret bound $\tilde{O}(d\sqrt{T})$, which is independent of the batch-size $K$. As a valuable by-product, our analysis technique and variance-adaptive algorithm can be applied to the CMAB-T and C$^2$MAB settings, improving existing results there as well. We also include experiments that demonstrate the improved performance of our algorithms compared with benchmark algorithms on synthetic and real-world datasets.
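To make the "UCB with contexts" ingredient concrete, the following is a minimal sketch of the standard linear upper-confidence-bound score that algorithms of this family build on: a ridge estimate of the unknown parameter plus an exploration width per base arm. This is an illustrative generic LinUCB-style computation, not the paper's exact C$^2$-UCB-T or VAC$^2$-UCB procedure; the function name, the exploration width $\alpha$, and the toy contexts are assumptions for the example.

```python
import numpy as np

def linear_ucb_scores(X, V, b, alpha):
    """Optimistic scores for base arms with context rows X.

    X: (n_arms, d) context matrix for the current round.
    V: (d, d) regularized Gram matrix, lambda*I + sum of x x^T
       over past observed (triggered) arms.
    b: (d,) sum of reward-weighted contexts over the same history.
    alpha: exploration width (a confidence-ellipsoid radius).
    Returns one UCB score per arm: mean estimate + width.
    """
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b                 # ridge estimate of theta*
    means = X @ theta_hat                 # plug-in mean estimates
    # widths[i] = sqrt(x_i^T V^{-1} x_i), the per-arm uncertainty
    widths = np.sqrt(np.einsum("ij,jk,ik->i", X, V_inv, X))
    return means + alpha * widths

# Toy usage: 3 arms in d=2 for a fresh learner (V = I, b = 0),
# so scores reduce to alpha * ||x_i||_2.
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
scores = linear_ucb_scores(X, np.eye(2), np.zeros(2), alpha=1.0)
```

A combinatorial algorithm would then feed these per-arm optimistic scores into an (approximation) oracle to select the super arm for the round, and update $V$ and $b$ only with the arms that were actually triggered.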