Learning Sparse Graph Laplacian with K Eigenvector Prior via Iterative GLASSO and Projection

25 October 2020
Saghar Bagheri, Gene Cheung, Antonio Ortega, Fen Wang
arXiv:2010.13179
Abstract

Learning a suitable graph is an important precursor to many graph signal processing (GSP) pipelines, such as graph spectral signal compression and denoising. Previous graph learning algorithms either i) make assumptions about connectivity (e.g., graph sparsity), or ii) make simple assumptions about graph edges, such as allowing positive edges only. In this paper, given an empirical covariance matrix $\bar{C}$ computed from data as input, we consider a structural assumption on the graph Laplacian matrix $L$: the first $K$ eigenvectors of $L$ are pre-selected, e.g., based on domain-specific criteria such as computational requirements, and the remaining eigenvectors are then learned from data. One example use case is image coding, where the first eigenvector is pre-chosen to be constant, regardless of the available observed data. We first prove that the subspace $H_u^+$ of symmetric positive semi-definite (PSD) matrices with the first $K$ eigenvectors being $\{u_k\}$, in a defined Hilbert space, is a convex cone. We then construct an operator to project a given positive definite (PD) matrix $L$ onto $H_u^+$, inspired by the Gram-Schmidt procedure. Finally, we design an efficient hybrid graphical lasso/projection algorithm to compute the most suitable graph Laplacian matrix $L^* \in H_u^+$ given $\bar{C}$. Experimental results show that, given the first $K$ eigenvectors as a prior, our algorithm outperforms competing graph learning schemes on a variety of graph comparison metrics.
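
The abstract describes the hybrid loop only at a high level, so the following is a minimal Python sketch of the idea under stated assumptions: alternate a standard graphical-lasso step (here scikit-learn's graphical_lasso, not the authors' solver) with a projection that re-imposes the $K$ prior eigenvectors. The projection below is a simple eigen-replacement via QR orthogonalization, only loosely inspired by the Gram-Schmidt construction mentioned above; the function names, the regularization weight alpha, the fixed iteration count, and the eigenvalue pairing are all illustrative assumptions, not the paper's actual operator or update rules.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def project_to_cone(M, U_prior, floor=1e-6):
    """Project symmetric M onto PSD matrices whose first K eigenvectors
    are the columns of U_prior (a simplified, hypothetical stand-in for
    the paper's Gram-Schmidt-inspired projection operator)."""
    K = U_prior.shape[1]
    evals, evecs = np.linalg.eigh(M)              # eigenvalues in ascending order
    # QR re-orthogonalizes the trailing eigenvectors against the prior
    # ones, a Gram-Schmidt-style step; Q's first K columns span U_prior.
    Q, _ = np.linalg.qr(np.hstack([U_prior, evecs[:, K:]]))
    evals = np.clip(evals, floor, None)           # keep the result PD/invertible
    return Q @ np.diag(evals) @ Q.T

def glasso_with_eigvec_prior(C_bar, U_prior, alpha=0.05, n_iter=10):
    """Illustrative loop: sparse-precision step, then eigenvector-prior
    projection, then map back to covariance space and repeat."""
    C = C_bar.copy()
    for _ in range(n_iter):
        _, L = graphical_lasso(C, alpha=alpha)    # sparse inverse covariance
        L = project_to_cone(L, U_prior)           # impose the K-eigenvector prior
        C = np.linalg.inv(L)
    return L

# Toy usage: 6 nodes, first eigenvector pre-chosen constant (K = 1),
# as in the image-coding use case from the abstract.
rng = np.random.default_rng(0)
C_bar = np.cov(rng.standard_normal((200, 6)), rowvar=False)
u1 = np.ones((6, 1)) / np.sqrt(6)
L_star = glasso_with_eigvec_prior(C_bar, u1)
```

Pairing the smallest eigenvalues with the prior eigenvectors follows the Laplacian convention that a pre-chosen constant eigenvector carries the smallest eigenvalue; the paper's exact projection operator and stopping criteria should be taken from the full text.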
