Decision Tree Algorithms for the Contextual Bandit Problem

27 April 2015
Raphaël Féraud
Robin Allesiardo
Tanguy Urvoy
Fabrice Clérot
ArXiv (abs) · PDF · HTML
Abstract

To address the contextual bandit problem, we propose online decision tree algorithms. The analysis of the proposed algorithms is based on the sample complexity needed to find the optimal decision stump. The decision stumps are then assembled into a decision tree, KMD-Tree, and into a random collection of decision trees, KMD-Forest. We show that the proposed algorithms are optimal up to a logarithmic factor. The dependence of the sample complexity on the number of contextual variables is logarithmic, and the computational cost of the proposed algorithms is linear in the time horizon. These analytical results make the proposed algorithms efficient in real applications, where the number of events to process is huge and where some contextual variables, chosen from a large set, are expected to have potentially non-linear dependencies on the rewards. When KMD-Trees are assembled into a KMD-Forest, the analysis is done against a strong reference: the Random Forest built with knowledge of the joint distribution of contexts and rewards. We show that the expected dependent regret bound against this strong reference is logarithmic with respect to the time horizon. In experiments that illustrate the theoretical analysis, KMD-Tree and KMD-Forest obtain promising results in comparison with state-of-the-art algorithms.
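
The sketch below is only an illustration of the general idea described in the abstract (choosing a single contextual variable to split on, then selecting the best arm per branch), not the paper's KMD-Tree or KMD-Forest algorithms; the environment, the epsilon-greedy exploration, and all names and parameters (M, K, T, EPS, the stump scoring) are assumptions made for this toy example.

```python
# Toy decision-stump contextual bandit (illustrative sketch only; not the
# paper's elimination-based KMD-Tree/KMD-Forest with sample-complexity
# guarantees). All parameters below are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, K, T, EPS = 5, 3, 20_000, 0.05  # context variables, arms, horizon, exploration rate

# Synthetic environment: each arm's reward depends non-linearly on a single
# hidden binary context variable (variable 0 here).
def draw_context():
    return rng.integers(0, 2, size=M)

def draw_reward(context, arm):
    p = 0.8 if (context[0] ^ (arm % 2)) else 0.3  # arm/branch interaction
    return rng.binomial(1, p)

# Per (variable, branch value, arm) reward statistics.
counts = np.ones((M, 2, K))  # start at 1 to avoid division by zero
sums = np.zeros((M, 2, K))

total_reward = 0
for t in range(T):
    x = draw_context()
    means = sums / counts                       # empirical mean reward per branch/arm
    # Score each candidate stump: best-arm mean averaged over its two
    # branches (branches assumed equiprobable in this sketch).
    stump_scores = means.max(axis=2).mean(axis=1)
    var = int(stump_scores.argmax())            # current best stump variable
    branch = x[var]
    if rng.random() < EPS:
        arm = int(rng.integers(K))              # explore
    else:
        arm = int(means[var, branch].argmax())  # exploit best arm on this branch
    r = draw_reward(x, arm)
    counts[var, branch, arm] += 1
    sums[var, branch, arm] += r
    total_reward += r

print(f"average reward over {T} rounds: {total_reward / T:.3f}")
```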
