Structure Learning of H-colorings

We study the structure learning problem for H-colorings, an important class of Markov random fields that capture key combinatorial structures on graphs, including proper colorings and independent sets, as well as spin systems from statistical physics. The learning problem is as follows: for a fixed (and known) constraint graph H with q colors and an unknown graph G = (V, E) with n vertices, given uniformly random H-colorings of G, how many samples are required to learn the edges of the unknown graph G? We give a characterization of H for which the problem is identifiable for every G, i.e., we can learn G with an infinite number of samples. We also show that there are identifiable constraint graphs for which one cannot hope to learn every graph G efficiently. We focus particular attention on the case of proper vertex q-colorings of graphs of maximum degree d, where intriguing connections to statistical physics phase transitions appear. We prove that in the tree uniqueness region (when q > d) the problem is identifiable and we can learn G in poly(d, q) * O(n^2 log n) time. In contrast, for soft-constraint systems, such as the Ising model, the best possible running time is exponential in d. In the tree non-uniqueness region (when q <= d) we prove that the problem is not identifiable and thus G cannot be learned. Moreover, when q < d we prove that even learning an equivalent graph (any graph with the same set of H-colorings) is computationally hard; sample complexity is exponential in n in the worst case. We further explore the connection between the efficiency/hardness of the structure learning problem and the uniqueness/non-uniqueness phase transition for general H-colorings, and prove that under the well-known Dobrushin uniqueness condition we can learn G in poly(d, q) * O(n^2 log n) time.
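To make the sampling model concrete, here is a minimal brute-force sketch for the special case of proper q-colorings (this is an illustration of the problem setup, not the paper's algorithm, and all function names are hypothetical). An edge {u, v} forces c(u) != c(v) in every sample, while a non-edge pair eventually receives equal colors in some uniform sample when q is large enough, so a naive learner declares an edge exactly when no observed coloring agrees on the pair.

```python
import itertools
import random

def proper_colorings(n, edges, q):
    """Brute-force enumeration of all proper q-colorings of a graph on n vertices."""
    return [c for c in itertools.product(range(q), repeat=n)
            if all(c[u] != c[v] for u, v in edges)]

def learn_edges(samples, n):
    """Declare an edge {u, v} iff no sample ever assigns u and v the same color."""
    return {(u, v) for u, v in itertools.combinations(range(n), 2)
            if all(c[u] != c[v] for c in samples)}

random.seed(0)

# Hidden graph G: a 4-cycle (max degree d = 2), with q = 5 > d colors.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
colorings = proper_colorings(4, edges, q=5)

# Uniformly random proper colorings of G, then the naive learner.
samples = [random.choice(colorings) for _ in range(2000)]
learned = learn_edges(samples, 4)
print(learned)  # recovers {(0, 1), (0, 3), (1, 2), (2, 3)} w.h.p.
```

With 2000 samples the chance that a non-adjacent pair (such as 0 and 2) never collides in color is vanishingly small, so the learner recovers the cycle; when q <= d this simple rule can fail, which is consistent with the non-identifiability result above.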