Adversarial Multi-dueling Bandits

We introduce the problem of regret minimization in adversarial multi-dueling bandits. While adversarial preferences have been studied in dueling bandits, they have not been explored in the multi-dueling setting. Here, the learner selects multiple arms at each round and observes, as feedback, the identity of the most preferred arm, determined by an arbitrary preference matrix chosen by an oblivious adversary. We introduce a novel algorithm, MiDEX (Multi Dueling EXP3), to learn from such preference feedback, which is assumed to be generated from a pairwise-subset choice model. We prove an upper bound on the expected cumulative T-round regret of MiDEX compared to a Borda-winner from a set of K arms. Moreover, we prove a lower bound for the expected regret in this setting, which demonstrates that our proposed algorithm is near-optimal.
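The EXP3-style mechanism the abstract alludes to can be illustrated with a toy sketch: maintain exponential weights over the arms, sample a subset each round, observe only the winner's identity, and apply an importance-weighted update to the winning arm. This is a hypothetical illustration of the general idea under assumed parameter choices (`gamma`, `eta`, the update rule, and the `winner_fn` interface are all my inventions), not the paper's actual MiDEX algorithm.

```python
import math
import random

def exp3_multidueling_sketch(K, m, T, winner_fn, gamma=0.1, eta=0.1, seed=0):
    """Toy EXP3-style learner for multi-dueling (winner-identity) feedback.

    Hypothetical sketch, NOT the paper's MiDEX algorithm: each round we
    draw m arms i.i.d. from an exploration-mixed exponential-weights
    distribution, observe which sampled arm is most preferred, and give
    that arm an importance-weighted weight update.
    """
    rng = random.Random(seed)
    weights = [1.0] * K
    probs = [1.0 / K] * K
    for _ in range(T):
        total = sum(weights)
        # Mix exponential weights with uniform exploration at rate gamma.
        probs = [(1 - gamma) * w / total + gamma / K for w in weights]
        # Draw a multiset of m arms from the current distribution.
        arms = rng.choices(range(K), weights=probs, k=m)
        # Feedback: only the identity of the most preferred sampled arm.
        winner = winner_fn(arms)
        # Importance-weighted estimate: dividing by the sampling
        # probability keeps the estimate unbiased despite partial feedback.
        est = 1.0 / probs[winner]
        weights[winner] *= math.exp(eta * gamma * est / K)
    return probs
```

For instance, if the environment always prefers the lowest-indexed arm in the sampled subset (`winner_fn=min`), the returned distribution concentrates on arm 0 after enough rounds.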