
Fast decision tree learning solves hard coding-theoretic problems

Abstract

We connect the problem of properly PAC learning decision trees to the parameterized Nearest Codeword Problem ($k$-NCP). Despite significant effort by the respective communities, algorithmic progress on both problems has been stuck: the fastest known algorithm for the former runs in quasipolynomial time (Ehrenfeucht and Haussler 1989) and the best known approximation ratio for the latter is $O(n/\log n)$ (Berman and Karpinski 2002; Alon, Panigrahy, and Yekhanin 2009). Research on both problems has thus far proceeded independently with no known connections. We show that \textit{any} improvement of Ehrenfeucht and Haussler's algorithm will yield $O(\log n)$-approximation algorithms for $k$-NCP, an exponential improvement of the current state of the art. This can be interpreted either as a new avenue for designing algorithms for $k$-NCP, or as one for establishing the optimality of Ehrenfeucht and Haussler's algorithm. Furthermore, our reduction, along with existing inapproximability results for $k$-NCP, already rules out polynomial-time algorithms for properly learning decision trees. A notable aspect of our hardness results is that they hold even in the setting of \textit{weak} learning, whereas prior ones were limited to the setting of strong learning.
