We consider the problem of pure exploration with subset-wise preference feedback over arms described by $d$-dimensional features. At each round the learner is allowed to query a subset of arms of a given size and receives feedback in the form of a noisy winner of that subset. The goal of the learner is to identify the best arm using as few queries as possible. This setting is relevant to various online decision-making scenarios involving human feedback, such as online retailing, streaming services, news feeds, and online advertising, since it is easier and more reliable for people to choose a preferred item from a subset than to assign a likability score to an item in isolation. To the best of our knowledge, this is the first work to consider the subset-wise preference feedback model in a structured setting, which allows for a potentially infinite set of arms. We present two algorithms that guarantee identification of the best arm with probability at least $1-\delta$, using a number of samples that scales with the dimension $d$ of the arm features and the appropriate notion of utility gap $\Delta$ among the arms. We also derive an instance-dependent lower bound that matches our upper bound on a worst-case instance. Finally, we run extensive experiments to corroborate our theoretical findings, and observe that our adaptive algorithm stops early, requiring up to 12x fewer samples than a non-adaptive algorithm.
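To make the interaction protocol concrete, the following is a minimal simulation sketch of the subset-wise feedback loop. It assumes a linear utility model and a multinomial-logit (top-1 Plackett-Luce) winner distribution, which are illustrative assumptions rather than details stated in the abstract; the names `theta`, `arms`, and `query_subset` are hypothetical.

```python
# Minimal sketch of subset-wise preference feedback (assumed MNL winner model).
import numpy as np

rng = np.random.default_rng(0)

d, n_arms, k = 5, 50, 4                 # feature dimension, number of arms, subset size
theta = rng.normal(size=d)              # unknown utility parameter (hidden from the learner)
arms = rng.normal(size=(n_arms, d))     # arm feature vectors

def query_subset(subset_idx):
    """Return the index of a noisy winner among the queried arms.

    The winner is drawn with probability proportional to exp(utility),
    i.e. a multinomial-logit / Plackett-Luce top-1 choice model
    (an assumption made for this sketch).
    """
    utilities = arms[subset_idx] @ theta
    probs = np.exp(utilities - utilities.max())
    probs /= probs.sum()
    return subset_idx[rng.choice(len(subset_idx), p=probs)]

# Example interaction: the learner queries a size-k subset and observes a noisy winner.
subset = rng.choice(n_arms, size=k, replace=False)
winner = query_subset(subset)
print(f"queried arms {subset}, observed winner {winner}")
```

A pure-exploration algorithm in this setting would choose which subsets to query (adaptively or not) and aggregate the observed winners to identify the arm with the highest utility.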