We consider the problem of model selection for two popular stochastic linear bandit settings, and propose algorithms that adapt to the unknown problem complexity. In the first setting, we consider the $K$ armed mixture bandits, where the mean reward of arm $i \in [K]$ is $\mu_i + \langle \alpha_{i,t}, \theta^* \rangle$, with $\alpha_{i,t} \in \mathbb{R}^d$ being the known context vector and $\mu_i \in [-1,1]$ and $\theta^*$ being unknown parameters. We define $\|\theta^*\|$ as the problem complexity and consider a sequence of nested hypothesis classes, each positing a different upper bound on $\|\theta^*\|$. Exploiting this, we propose Adaptive Linear Bandit (ALB), a novel phase-based algorithm that adapts to the true problem complexity, $\|\theta^*\|$. We show that ALB achieves regret scaling of $O(\|\theta^*\|\sqrt{T})$, where $\|\theta^*\|$ is a priori unknown. As a corollary, when $\theta^* = 0$, ALB recovers the minimax regret of the simple bandit algorithm without such knowledge of $\theta^*$. ALB is the first algorithm that uses the parameter norm as a model selection criterion for linear bandits. Prior state-of-the-art algorithms \cite{osom} achieve a regret of $O(L\sqrt{T})$, where $L$ is an upper bound on $\|\theta^*\|$ fed as an input to the problem. In the second setting, we consider the standard linear bandit problem (with possibly an infinite number of arms) where the sparsity of $\theta^*$, denoted by $d^* \leq d$, is unknown to the algorithm. Defining $d^*$ as the problem complexity, we show that ALB achieves $O(d^*\sqrt{T})$ regret, matching that of an oracle who knew the true sparsity level. This methodology is then extended to the case of finitely many arms and similar results are proven. This is the first algorithm that achieves such model selection guarantees. We further verify our results via synthetic and real-data experiments.
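The abstract only names the phase-based adaptation mechanism; to make the idea concrete, here is a minimal, purely illustrative Python sketch of a norm-adaptive scheme of this flavor. It drops the per-arm biases $\mu_i$, uses a LinUCB-style base learner whose exploration bonus is scaled by the current norm bound, doubles the phase length each phase, and refines the bound from a ridge estimate at each phase end; all of these details (names, phase schedule, refinement rule) are simplifying assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def alb_norm_sketch(T=2000, d=5, K=10, seed=0):
    """Toy phase-based norm adaptation for a linear bandit (illustrative only).

    Simplifying assumptions: no per-arm biases mu_i, a plain LinUCB-style
    base learner, doubling phase lengths, and an ad hoc norm-refinement rule.
    """
    rng = np.random.default_rng(seed)
    theta_star = rng.normal(size=d)
    theta_star *= 0.4 / np.linalg.norm(theta_star)    # true complexity ||theta*||

    lam = 1.0
    V = lam * np.eye(d)             # regularized design matrix, kept across phases
    xy = np.zeros(d)                # accumulated context * reward
    b = 2.0                         # current hypothesis-class bound on ||theta*||
    t, phase, cum_regret = 0, 0, 0.0

    while t < T:
        phase += 1
        for _ in range(min(2 ** phase, T - t)):        # doubling phase lengths
            A = rng.normal(size=(K, d))                # contexts alpha_{i,t}
            V_inv = np.linalg.inv(V)
            theta_hat = V_inv @ xy                     # ridge estimate of theta*
            # Exploration bonus scales with the current bound b, so a tighter
            # b (closer to ||theta*||) forces less exploration.
            bonus = b * np.sqrt(np.einsum('kd,de,ke->k', A, V_inv, A))
            i = int(np.argmax(A @ theta_hat + bonus))  # optimistic arm choice
            x = A[i]
            r = x @ theta_star + 0.1 * rng.normal()    # noisy linear reward
            V += np.outer(x, x)
            xy += r * x
            cum_regret += (A @ theta_star).max() - x @ theta_star
            t += 1
        # End of phase: refine the norm bound from the current estimate,
        # inflated by a slack term that shrinks as data accumulates.
        theta_hat = np.linalg.solve(V, xy)
        b = np.linalg.norm(theta_hat) + np.sqrt(d * np.log(t + 1) / t)

    return cum_regret, np.linalg.norm(theta_star), b
```

In this toy, the bound $b$ contracts toward $\|\theta^*\|$ across phases instead of staying pinned at a loose input $L$, which is the mechanism behind regret scaling with $\|\theta^*\|\sqrt{T}$ rather than $L\sqrt{T}$.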