Adversarial Dueling Bandits

We introduce the problem of regret minimization in Adversarial Dueling Bandits. As in classic Dueling Bandits, the learner has to repeatedly choose a pair of items and observe only a relative binary `win-loss' feedback for this pair, but here this feedback is generated from an arbitrary preference matrix, possibly chosen adversarially. Our main result is an algorithm whose $T$-round regret compared to the \emph{Borda-winner} from a set of $K$ items is $\tilde{O}(K^{1/3}T^{2/3})$, as well as a matching $\Omega(K^{1/3}T^{2/3})$ lower bound. We also prove a similar high-probability regret bound. We further consider a simpler \emph{fixed-gap} adversarial setup, which bridges between two extreme preference feedback models for dueling bandits: stationary preferences and an arbitrary sequence of preferences. For the fixed-gap adversarial setup we give an $\tilde{O}\bigl((K/\Delta^2)\log T\bigr)$ regret algorithm, where $\Delta$ is the gap in Borda scores between the best item and all other items, and show a lower bound of $\Omega(K/\Delta^2)$, indicating that our dependence on the main problem parameters $K$ and $\Delta$ is tight (up to logarithmic factors).
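To make the quantities in these bounds concrete, here is a brief sketch of the Borda score and the resulting regret in standard dueling-bandit notation (the symbols $P_t$, $i_t$, $j_t$ are our assumed notation for the round-$t$ preference matrix and the pair played, not taken verbatim from the paper): given $P_t \in [0,1]^{K \times K}$ with $P_t(i,j)$ the probability that item $i$ beats item $j$ in round $t$,
\[
  B_t(i) \;=\; \frac{1}{K-1} \sum_{j \neq i} P_t(i,j),
  \qquad
  R_T \;=\; \max_{k \in [K]} \sum_{t=1}^{T} B_t(k)
  \;-\; \frac{1}{2} \sum_{t=1}^{T} \bigl( B_t(i_t) + B_t(j_t) \bigr),
\]
where $(i_t, j_t)$ is the pair dueled at round $t$. Under these conventions, the fixed-gap setup can be read as requiring that the Borda-score gap $B_t(k^\ast) - B_t(i)$ between the best item $k^\ast$ and every other item $i$ is at least $\Delta$ across rounds.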