ResearchTrend.AI

arXiv:1812.04700

Predictive Learning on Hidden Tree-Structured Ising Models

11 December 2018
Konstantinos E. Nikolakakis
Dionysios S. Kalogerias
Anand D. Sarwate
Abstract

We provide high-probability sample complexity guarantees for exact structure recovery and accurate predictive learning using noise-corrupted samples from an acyclic (tree-shaped) graphical model. The hidden variables follow a tree-structured Ising model distribution, whereas the observable variables are generated by a binary symmetric channel taking the hidden variables as its input (flipping each bit independently with some constant probability q ∈ [0, 1/2)). This simple model arises naturally in a variety of applications, such as physics, biology, computer science, and finance. In the absence of noise, the structure learning problem was recently studied by Bresler and Karzand (2018); this paper quantifies how noise in the hidden model affects the sample complexity of structure learning and of estimating marginal distributions, by proving upper and lower bounds on the sample complexity. Our results generalize state-of-the-art bounds reported in prior work, and they exactly recover the noiseless case (q = 0). As expected, for any tree with p vertices and probability of incorrect recovery δ > 0, the sufficient number of samples remains logarithmic as in the noiseless case, i.e., O(log(p/δ)), while the dependence on q is O(1/(1−2q)⁴) for both aforementioned tasks. We also present a new equivalent of Isserlis's theorem for sign-valued tree-structured distributions, yielding a new low-complexity algorithm for higher-order moment estimation.
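The generative model in the abstract — hidden spins drawn from a tree-structured Ising model, then observed through a binary symmetric channel with crossover probability q — can be sketched as below. This is an illustrative sampler written for this summary, not code from the paper; the function name, the dictionary-based tree encoding, and the example edge weights (`theta`) are all assumptions.

```python
import math
import random

def sample_noisy_tree_ising(parents, theta, q, rng=random):
    """Draw one sample of (hidden, observed) spins in {-1, +1}.

    parents: dict node -> parent (root maps to None), with every parent
             listed before its children (topological order).
    theta:   dict node -> Ising coupling on the edge to its parent.
    q:       crossover probability of the binary symmetric channel.
    """
    hidden = {}
    for node, par in parents.items():
        if par is None:
            hidden[node] = rng.choice([-1, 1])  # uniform root spin
        else:
            # On a tree, a child agrees with its parent w.p. (1 + tanh(theta)) / 2.
            agree = (1 + math.tanh(theta[node])) / 2
            hidden[node] = hidden[par] if rng.random() < agree else -hidden[par]
    # Observable layer: each hidden bit is flipped independently w.p. q.
    observed = {n: -s if rng.random() < q else s for n, s in hidden.items()}
    return hidden, observed

# Noiseless case (q = 0): the observations coincide with the hidden spins.
parents = {0: None, 1: 0, 2: 0, 3: 1}
theta = {1: 0.8, 2: 0.8, 3: 1.2}
h, o = sample_noisy_tree_ising(parents, theta, q=0.0)
```

Setting q > 0 flips each observed spin independently, which is the corruption whose cost the paper's O(1/(1−2q)⁴) factor quantifies.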
