Support Consistency of Direct Sparse-Change Learning in Markov Networks

2 July 2014
Song Liu, Taiji Suzuki, Raissa Relator, Jun Sese, Masashi Sugiyama, Kenji Fukumizu
Abstract

We study the problem of learning sparse structure changes between two Markov networks $P$ and $Q$. Rather than fitting two Markov networks separately to two sets of data and figuring out their differences, a recent work proposed to learn changes \emph{directly} via estimating the ratio between the two Markov network models. In this paper, we give sufficient conditions for \emph{successful change detection} with respect to the sample sizes $n_p, n_q$, the dimension of data $m$, and the number of changed edges $d$. When using an unbounded density ratio model, we prove that the true sparse changes can be consistently identified for $n_p = \Omega(d^2 \log \frac{m^2+m}{2})$ and $n_q = \Omega(n_p^2)$, with an exponentially decaying upper bound on the learning error. This sample complexity can be improved to $\min(n_p, n_q) = \Omega(d^2 \log \frac{m^2+m}{2})$ when boundedness of the density ratio model is assumed. Our theoretical guarantee applies to a wide range of discrete/continuous Markov networks.
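The direct approach models the density ratio $p(x)/q(x)$ as a log-linear function of pairwise features and imposes a sparsity penalty, so that nonzero coefficients flag changed edges without estimating either network separately. Below is a minimal, hypothetical sketch of that idea, not the authors' exact estimator: a KLIEP-style objective with a plain L1 penalty solved by proximal gradient descent. Function names, step sizes, and the toy Gaussian example are illustrative assumptions.

```python
import numpy as np

def pairwise_features(X):
    """Pairwise features x_t * x_s for every candidate edge t < s."""
    n, m = X.shape
    edges = [(t, s) for t in range(m) for s in range(t + 1, m)]
    F = np.stack([X[:, t] * X[:, s] for t, s in edges], axis=1)  # (n, m(m-1)/2)
    return F, edges

def fit_sparse_change(Xp, Xq, lam=0.05, lr=0.05, n_iter=3000):
    """Proximal-gradient fit of an L1-penalised KLIEP-style objective on the
    density ratio p(x)/q(x) modelled as exp(theta . f(x)) up to normalisation.
    Nonzero entries of theta flag candidate changed edges."""
    Fp, edges = pairwise_features(Xp)
    Fq, _ = pairwise_features(Xq)
    theta = np.zeros(Fp.shape[1])
    for _ in range(n_iter):
        # gradient of  -mean_p[theta.f(x)] + log mean_q[exp(theta.f(x))]
        s = Fq @ theta
        w = np.exp(s - s.max())
        w /= w.sum()
        grad = -Fp.mean(axis=0) + Fq.T @ w
        theta -= lr * grad
        # soft-thresholding: proximal step for the penalty lam * ||theta||_1
        theta = np.sign(theta) * np.maximum(np.abs(theta) - lr * lam, 0.0)
    changed = [edges[k] for k in np.flatnonzero(theta)]
    return theta, changed

# Toy usage: two Gaussian Markov networks differing in a single edge.
rng = np.random.default_rng(0)
m = 6
Sp, Sq = np.eye(m), np.eye(m)
Sq[0, 1] = Sq[1, 0] = 0.4  # the only changed interaction (precision matrices)
Xp = rng.multivariate_normal(np.zeros(m), np.linalg.inv(Sp), size=2000)
Xq = rng.multivariate_normal(np.zeros(m), np.linalg.inv(Sq), size=2000)
theta, changed = fit_sparse_change(Xp, Xq)
print(changed)  # ideally recovers [(0, 1)], up to a few spurious edges
```

For multi-parameter factors (e.g. continuous or multi-state Markov networks), a group penalty over the parameters of each edge would typically replace the plain L1 term used in this sketch.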
