Learning Loopy Graphical Models with Latent Variables: Efficient Methods and Guarantees

Abstract

The problem of structure estimation in latent graphical models is considered, where some nodes are latent or hidden. A novel method is proposed which attempts to locally reconstruct latent trees and outputs a loopy graph structure with hidden variables. Correctness of the method is established when the underlying graph has a large girth and the model is in the regime of correlation decay, and PAC guarantees for the method are also derived. For the special case of the Ising model, the number of samples $n$ required for structural consistency scales as $n = \Omega(\theta_{\min}^{-2\delta\eta(\eta+1)-2}\log p)$, where $\theta_{\min}$ is the minimum edge potential, $\delta$ is the depth (i.e., the distance from a hidden node to the nearest observed nodes), and $\eta$ is a parameter which depends on the bounds on the node and edge potentials in the Ising model. The results are further specialized to the case when the observed nodes are uniformly sampled from the model. Finally, necessary conditions for structural consistency under any algorithm are derived.
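
The only quantitative statement above is the sample-complexity bound, so a minimal sketch of how it scales may be useful. The constant factor, the function name, and the parameter values in the snippet below are illustrative assumptions, not taken from the paper; only the order of growth $\theta_{\min}^{-2\delta\eta(\eta+1)-2}\log p$ comes from the abstract.

```python
import math

def sample_complexity(theta_min, delta, eta, p, C=1.0):
    """Illustrative evaluation of the abstract's scaling
    n = Omega(theta_min^{-2*delta*eta*(eta+1) - 2} * log p).
    C is a hypothetical constant; the paper states only the order of growth."""
    exponent = -2 * delta * eta * (eta + 1) - 2
    return C * theta_min ** exponent * math.log(p)

# Weaker edges (smaller theta_min) or deeper hidden nodes (larger delta)
# sharply increase the number of samples required.
for theta_min in (0.3, 0.2, 0.1):
    print(theta_min, sample_complexity(theta_min, delta=2, eta=1.5, p=1000))
```

The exponential dependence on the depth $\delta$ reflects that correlations between observed nodes weaken as hidden nodes sit farther from the observed ones, so distinguishing structures requires correspondingly more samples.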
