We propose a bandit algorithm that explores by randomizing its history of observations. The algorithm estimates the value of each arm from a non-parametric bootstrap sample of its history, which is augmented with pseudo observations. Our novel design of pseudo observations guarantees that the bootstrap estimates are optimistic with high probability. We call our algorithm Giro, which stands for garbage in, reward out. We analyze Giro in a $K$-armed Bernoulli bandit and prove an $O(K \Delta^{-1} \log n)$ bound on its $n$-round regret, where $K$ is the number of arms and $\Delta$ is the difference in the expected rewards of the optimal and the best suboptimal arms. The key advantage of our exploration design is that it can be easily applied to structured problems. To show this, we propose contextual Giro with an arbitrary non-linear reward generalization model. We evaluate Giro and its contextual variant on multiple synthetic and real-world problems, and observe that Giro performs well.
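To make the exploration mechanism concrete, the following is a minimal sketch of bootstrap exploration with pseudo observations in a Bernoulli bandit, in the spirit of Giro as described above. It is an illustrative assumption rather than the paper's reference implementation: the function name, the parameter `a` (number of pseudo observations of each value added per real observation), and the overall structure are chosen here for exposition.

```python
import numpy as np

def giro_sketch(arms, n_rounds, a=1, rng=None):
    """Sketch of bootstrap exploration with pseudo observations (Giro-style).

    arms: list of callables, each returning a reward in {0, 1} when pulled.
    a: pseudo observations of each value (0 and 1) added per real observation
       before bootstrapping (an assumed parameterization).
    """
    rng = rng or np.random.default_rng()
    K = len(arms)
    histories = [[] for _ in range(K)]   # observed rewards per arm

    for _ in range(n_rounds):
        values = np.empty(K)
        for i in range(K):
            h = histories[i]
            if not h:
                values[i] = np.inf       # pull each arm at least once
                continue
            # Augment the history: for every real reward, add `a` pseudo
            # rewards of 0 and `a` pseudo rewards of 1 ("garbage in").
            augmented = h + [0, 1] * (a * len(h))
            # Non-parametric bootstrap: resample with replacement and use
            # the sample mean as the arm's value estimate.
            sample = rng.choice(augmented, size=len(augmented), replace=True)
            values[i] = sample.mean()
        arm = int(np.argmax(values))     # act greedily w.r.t. bootstrap means
        histories[arm].append(arms[arm]())
    return histories
```

The pseudo observations matter because they inflate the variance of the bootstrap mean: even when the real history of an arm looks poor, the resampled estimate exceeds the true mean with non-negligible probability, which yields the optimism that drives exploration.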