ResearchTrend.AI
arXiv:1803.06521
Beyond the Low-Degree Algorithm: Mixtures of Subcubes and Their Applications

17 March 2018
Sitan Chen
Ankur Moitra
Abstract

We introduce the problem of learning mixtures of $k$ subcubes over $\{0,1\}^n$, which contains many classic learning theory problems as a special case (and is itself a special case of others). We give a surprising $n^{O(\log k)}$-time learning algorithm based on higher-order multilinear moments. It is not possible to learn the parameters because the same distribution can be represented by quite different models. Instead, we develop a framework for reasoning about how multilinear moments can pinpoint essential features of the mixture, like the number of components. We also give applications of our algorithm to learning decision trees with stochastic transitions (which also capture interesting scenarios where the transitions are deterministic but there are latent variables). Using our algorithm for learning mixtures of subcubes, we can approximate the Bayes optimal classifier within additive error $\epsilon$ on $k$-leaf decision trees with at most $s$ stochastic transitions on any root-to-leaf path in $n^{O(s + \log k)} \cdot \text{poly}(1/\epsilon)$ time. In this stochastic setting, the classic Occam algorithms for learning decision trees with zero stochastic transitions break down, while the low-degree algorithm of Linial et al. inherently has a quasipolynomial dependence on $1/\epsilon$. In contrast, as we will show, mixtures of $k$ subcubes are uniquely determined by their degree $2 \log k$ moments and hence provide a useful abstraction for simultaneously achieving the polynomial dependence on $1/\epsilon$ of the classic Occam algorithms for decision trees and the flexibility of the low-degree algorithm in being able to accommodate stochastic transitions. Using our multilinear moment techniques, we also give the first improved upper and lower bounds since the work of Feldman et al. for the related but harder problem of learning mixtures of binary product distributions.
