arXiv:2107.00819
Decision tree heuristics can fail, even in the smoothed setting

2 July 2021
Guy Blanc
Jane Lange
Mingda Qiao
Li-Yang Tan
Abstract

Greedy decision tree learning heuristics are mainstays of machine learning practice, but theoretical justification for their empirical success remains elusive. In fact, it has long been known that there are simple target functions for which they fail badly (Kearns and Mansour, STOC 1996). Recent work of Brutzkus, Daniely, and Malach (COLT 2020) considered the smoothed analysis model as a possible avenue towards resolving this disconnect. Within the smoothed setting and for targets $f$ that are $k$-juntas, they showed that these heuristics successfully learn $f$ with depth-$k$ decision tree hypotheses. They conjectured that the same guarantee holds more generally for targets that are depth-$k$ decision trees. We provide a counterexample to this conjecture: we construct targets that are depth-$k$ decision trees and show that even in the smoothed setting, these heuristics build trees of depth $2^{\Omega(k)}$ before achieving high accuracy. We also show that the guarantees of Brutzkus et al. cannot extend to the agnostic setting: there are targets that are very close to $k$-juntas, for which these heuristics build trees of depth $2^{\Omega(k)}$ before achieving high accuracy.
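For context, the greedy heuristics the abstract refers to build a tree top-down, at each node splitting on the single coordinate that most reduces an impurity measure of the labels. The sketch below (in Python, over the Boolean cube, using Gini impurity) is a minimal illustration of that general template, not the paper's construction or the exact algorithm analyzed; all function and variable names are illustrative.

```python
# Minimal sketch of a greedy top-down decision tree heuristic (illustrative
# only; not the construction or exact variant studied in the paper).
from collections import Counter

def gini(labels):
    """Gini impurity of a multiset of labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y):
    """Coordinate whose split most reduces weighted Gini impurity, or None."""
    n = len(y)
    best_i, best_score = None, gini(y)
    for i in range(len(X[0])):
        left = [y[j] for j in range(n) if X[j][i] == 0]
        right = [y[j] for j in range(n) if X[j][i] == 1]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best_score:  # strict improvement, so no empty splits
            best_i, best_score = i, score
    return best_i

def grow_tree(X, y, depth_budget):
    """Greedy top-down growth: recurse until pure or out of depth."""
    if depth_budget == 0 or len(set(y)) <= 1:
        return Counter(y).most_common(1)[0][0]  # leaf: majority label
    i = best_split(X, y)
    if i is None:  # no coordinate improves impurity
        return Counter(y).most_common(1)[0][0]
    lo = [j for j in range(len(y)) if X[j][i] == 0]
    hi = [j for j in range(len(y)) if X[j][i] == 1]
    return (i,
            grow_tree([X[j] for j in lo], [y[j] for j in lo], depth_budget - 1),
            grow_tree([X[j] for j in hi], [y[j] for j in hi], depth_budget - 1))

def predict(tree, x):
    """Walk the tree: internal nodes are (coordinate, left, right) tuples."""
    while isinstance(tree, tuple):
        i, left, right = tree
        tree = right if x[i] == 1 else left
    return tree
```

The paper's negative results show that for certain depth-$k$ targets, heuristics of this greedy flavor must grow trees of depth $2^{\Omega(k)}$ before reaching high accuracy, even under smoothing.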
