To address the contextual bandit problem, we propose online decision tree algorithms. Our analysis rests on the sample complexity needed to identify the optimal decision stump; the stumps are then assembled into a decision tree, Bandit Tree, and into a random collection of decision trees, Bandit Forest. We show that the proposed algorithms are optimal up to a logarithmic factor, that the sample complexity depends only logarithmically on the number of contextual variables, and that the computational cost is linear in the time horizon. These properties make the proposed algorithms efficient in real applications, where the number of events to process is huge and where some contextual variables, chosen from a large set, are expected to have potentially non-linear dependencies with the rewards. When Bandit Trees are assembled into a Bandit Forest, the analysis is carried out against a strong reference: the random forest built with knowledge of the joint distribution of contexts and rewards. We show that the distribution-dependent bound on the expected regret against this strong reference is logarithmic in the time horizon. In experiments illustrating the theoretical analysis, Bandit Tree and Bandit Forest obtain promising results in comparison with state-of-the-art algorithms.
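To make the building block concrete, the following is a minimal sketch, not the paper's exact algorithm, of a decision stump for a contextual bandit with binary context variables: it maintains per-(variable, branch, arm) empirical mean rewards, scores each candidate split variable by the weighted value of its best arm per branch, and plays epsilon-greedily over the currently best-scoring stump. The class name `BanditStump` and all parameters are illustrative assumptions.

```python
import random

class BanditStump:
    """Illustrative sketch (not the paper's algorithm): an epsilon-greedy
    decision stump for a contextual bandit with binary context variables."""

    def __init__(self, n_vars, n_arms, epsilon=0.1):
        self.n_vars, self.n_arms, self.epsilon = n_vars, n_arms, epsilon
        # counts[v][b][a] / sums[v][b][a]: pulls and cumulative reward of arm a
        # when candidate split variable v took value b (the branch).
        self.counts = [[[0] * n_arms for _ in range(2)] for _ in range(n_vars)]
        self.sums = [[[0.0] * n_arms for _ in range(2)] for _ in range(n_vars)]

    def _mean(self, v, b, a):
        c = self.counts[v][b][a]
        return self.sums[v][b][a] / c if c else 0.0

    def _stump_value(self, v):
        # Value of splitting on variable v: the best arm's empirical mean on
        # each branch, weighted by how often that branch was observed.
        total = sum(self.counts[v][b][a]
                    for b in range(2) for a in range(self.n_arms))
        if total == 0:
            return 0.0
        value = 0.0
        for b in range(2):
            n_b = sum(self.counts[v][b])
            if n_b:
                value += (n_b / total) * max(self._mean(v, b, a)
                                             for a in range(self.n_arms))
        return value

    def select(self, context):
        if random.random() < self.epsilon:
            return random.randrange(self.n_arms)  # forced exploration
        v = max(range(self.n_vars), key=self._stump_value)  # best split so far
        b = context[v]
        return max(range(self.n_arms), key=lambda a: self._mean(v, b, a))

    def update(self, context, arm, reward):
        # The observed reward informs every candidate split variable at once.
        for v in range(self.n_vars):
            b = context[v]
            self.counts[v][b][arm] += 1
            self.sums[v][b][arm] += reward
```

Such stumps could then be composed recursively into a tree, or trained over random subsets of variables and aggregated into a forest, mirroring the Bandit Tree and Bandit Forest constructions described above.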