Projection-free Graph-based Classifier Learning using Gershgorin Disc Perfect Alignment

3 June 2021
Cheng Yang
Gene Cheung
Guangtao Zhai
Abstract

In semi-supervised graph-based binary classifier learning, a subset of known labels $\hat{x}_i$ is used to infer unknown labels, assuming that the label signal $\mathbf{x}$ is smooth with respect to a similarity graph specified by a Laplacian matrix. When the labels $x_i$ are restricted to binary values, the problem is NP-hard. While a conventional semi-definite programming relaxation (SDR) can be solved in polynomial time using, for example, the alternating direction method of multipliers (ADMM), the per-iteration complexity of projecting a candidate matrix $\mathbf{M}$ onto the positive semi-definite (PSD) cone ($\mathbf{M} \succeq 0$) remains high. In this paper, leveraging a recent linear algebraic theory called Gershgorin disc perfect alignment (GDPA), we propose a fast projection-free method that instead solves a sequence of linear programs (LP). Specifically, we first recast the SDR to its dual, where a feasible solution $\mathbf{H} \succeq 0$ is interpreted as the Laplacian matrix of a balanced signed graph minus its last node. To achieve graph balance, we split the last node into two, each retaining the original positive / negative edges, resulting in a new Laplacian $\bar{\mathbf{H}}$. We re-pose the SDR dual for the solution $\bar{\mathbf{H}}$, then replace the PSD cone constraint $\bar{\mathbf{H}} \succeq 0$ with linear constraints derived from GDPA -- sufficient conditions ensuring that $\bar{\mathbf{H}}$ is PSD -- so that the optimization becomes an LP per iteration. Finally, we extract the predicted labels from the converged solution $\bar{\mathbf{H}}$. Experiments show that our algorithm achieved a $28\times$ speedup over the next fastest scheme while attaining comparable label prediction performance.
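The key idea behind the linear constraints can be illustrated with the classical Gershgorin circle theorem, on which GDPA builds: every eigenvalue of a real symmetric matrix lies in some disc centered at a diagonal entry with radius equal to the sum of absolute off-diagonal entries in that row, so if all disc left-ends are non-negative, the matrix is certifiably PSD. A diagonal similarity transform $\mathbf{S}\mathbf{M}\mathbf{S}^{-1}$ preserves eigenvalues but moves the discs, and the left-end conditions are linear in the matrix entries. The sketch below (not the paper's actual algorithm, just an illustration of this certificate; function and variable names are my own) demonstrates this in NumPy:

```python
import numpy as np

def gershgorin_psd_certificate(M, s=None):
    """Sufficient (not necessary) PSD test for a symmetric matrix M via
    the Gershgorin circle theorem, optionally after a diagonal similarity
    transform S M S^{-1} with positive scalars s.

    Disc i of B = S M S^{-1} is centered at B[i, i] with radius
    sum_{j != i} |B[i, j]|.  If every disc left-end B[i, i] - radius_i
    is >= 0, all eigenvalues are >= 0, hence M is PSD.  These left-end
    conditions are linear in the entries of M, which is what allows a
    PSD cone constraint to be replaced by linear constraints.
    """
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    s = np.ones(n) if s is None else np.asarray(s, dtype=float)
    B = np.outer(s, 1.0 / s) * M          # entrywise: s_i * M_ij / s_j
    radii = np.abs(B).sum(axis=1) - np.abs(np.diag(B))
    left_ends = np.diag(B) - radii
    return bool(np.all(left_ends >= 0))

# A graph Laplacian is diagonally dominant, so the unscaled (s = 1)
# certificate already succeeds:
L = np.array([[ 2., -1., -1.],
              [-1.,  1.,  0.],
              [-1.,  0.,  1.]])
print(gershgorin_psd_certificate(L))                   # True

# A PSD matrix whose discs initially cross zero, but a suitable
# positive scaling moves every left-end to >= 0:
M = np.array([[1., 2.],
              [2., 5.]])
print(gershgorin_psd_certificate(M))                   # False
print(gershgorin_psd_certificate(M, s=[1., 2.5]))      # True
```

The second example shows why the scaling matters: the certificate is conservative for a fixed $\mathbf{S}$, and choosing the scalars well (as GDPA does, aligning all disc left-ends) makes the linear conditions tight enough to track the PSD cone.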
