ResearchTrend.AI

Meta Learning for High-dimensional Ising Model Selection Using $\ell_1$-regularized Logistic Regression

19 August 2022
Huiming Xie
Jean Honorio
Abstract

In this paper, we consider the meta learning problem of estimating the graphs associated with high-dimensional Ising models, using the method of $\ell_1$-regularized logistic regression for neighborhood selection of each node. Our goal is to use the information learned from the auxiliary tasks in the learning of the novel task to reduce its sufficient sample complexity. To this end, we propose a novel generative model as well as an improper estimation method. In our setting, all the tasks are \emph{similar} in their \emph{random} model parameters and supports. By pooling all the samples from the auxiliary tasks to \emph{improperly} estimate a single parameter vector, we can recover the true support union, assumed small in size, with high probability, given a sufficient sample complexity of $\Omega(1)$ per task, for $K = \Omega(d^3 \log p)$ tasks of Ising models with $p$ nodes and a maximum neighborhood size $d$. Then, with the support for the novel task restricted to the estimated support union, we prove that consistent neighborhood selection for the novel task can be obtained with a reduced sufficient sample complexity of $\Omega(d^3 \log d)$.
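As a rough illustration of the core primitive, the sketch below runs $\ell_1$-regularized logistic regression to select the neighborhood of one node from binary spin data, and shows the pooling step where samples from several auxiliary tasks are stacked to fit a single parameter vector. This is a hedged toy, not the paper's estimator: the data are i.i.d. random spins rather than true Ising samples, and the regularization strength `C` and the helper name `neighborhood_selection` are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: n samples of p binary spins in {-1, +1}.
# (Illustrative i.i.d. draws, not true Ising-model samples.)
n, p = 200, 10
X = rng.choice([-1, 1], size=(n, p))

def neighborhood_selection(X, node, C=0.5):
    """Estimate the neighborhood of `node` via l1-regularized
    logistic regression of its spin on all other spins.
    `C` (inverse regularization strength) is an arbitrary toy value."""
    y = X[:, node]
    others = np.delete(X, node, axis=1)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(others, y)
    # Support = indices with (numerically) nonzero coefficients.
    support = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-6)
    # Map positions back to original node indices (skipping `node`).
    idx = np.delete(np.arange(X.shape[1]), node)
    return idx[support]

# Pooling: stack the samples of K auxiliary tasks and fit once,
# "improperly" estimating a single parameter vector whose support
# approximates the support union across tasks.
K = 5
pooled = np.vstack([rng.choice([-1, 1], size=(n, p)) for _ in range(K)])
union_estimate = neighborhood_selection(pooled, node=0)
print(union_estimate)
```

On real Ising samples the recovered support of each node's regression would estimate its neighborhood; here, with independent spins, the selected set is expected to be small or empty.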
