Learning stochastic decision trees

Abstract
We give a quasipolynomial-time algorithm for learning stochastic decision trees that is optimally resilient to adversarial noise. Given an η-corrupted set of uniform random samples labeled by a size-s stochastic decision tree, our algorithm runs in quasipolynomial time and returns a hypothesis with error within an additive 2η + ε of the Bayes optimal. An additive 2η is the information-theoretic minimum. Previously, no non-trivial algorithm with a guarantee of O(η) + ε was known, even for weaker noise models. Our algorithm is furthermore proper, returning a hypothesis that is itself a decision tree; previously no such algorithm was known even in the noiseless setting.
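To make the learning model concrete, below is a minimal illustrative sketch, not taken from the paper, of the setup the abstract describes: uniform random samples over {0,1}^n are labeled by a stochastic decision tree, and an adversary then corrupts an η fraction of the labeled samples. The `Node` class, the label-flipping stand-in for the adversary, and all parameter names are assumptions made purely for illustration.

```python
import random

# Illustrative sketch (not from the paper) of the learning setup:
# a stochastic decision tree labels uniform random inputs, and an
# adversary corrupts an eta fraction of the labeled samples.

class Node:
    """Internal node queries one coordinate; a leaf holds Pr[label = 1]."""
    def __init__(self, var=None, left=None, right=None, p_one=None):
        self.var, self.left, self.right, self.p_one = var, left, right, p_one

    def label(self, x):
        # At a leaf, the label is drawn from a Bernoulli distribution,
        # which is what makes the tree "stochastic".
        if self.var is None:
            return int(random.random() < self.p_one)
        child = self.right if x[self.var] else self.left
        return child.label(x)

def uniform_samples(tree, n, m):
    """m uniform samples from {0,1}^n, labeled by the stochastic tree."""
    samples = []
    for _ in range(m):
        x = tuple(random.randint(0, 1) for _ in range(n))
        samples.append((x, tree.label(x)))
    return samples

def corrupt(samples, eta):
    """Corrupt an eta fraction of the samples. A label flip on a random
    subset is used here as a simple stand-in for an arbitrary
    (worst-case) adversary."""
    corrupted = list(samples)
    k = int(eta * len(samples))
    for i in random.sample(range(len(samples)), k):
        x, y = corrupted[i]
        corrupted[i] = (x, 1 - y)
    return corrupted

# Example: a depth-1 stochastic tree over n = 3 variables.
tree = Node(var=0, left=Node(p_one=0.1), right=Node(p_one=0.9))
data = corrupt(uniform_samples(tree, n=3, m=1000), eta=0.05)
```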