Complexity-Optimized Sparse Bayesian Learning for Scalable Classification Tasks

17 July 2021
Jiahua Luo
Chi-Man Wong
Chi-Man Vong
Abstract

Sparse Bayesian Learning (SBL) constructs an extremely sparse probabilistic model with very competitive generalization. However, SBL needs to invert a large covariance matrix with complexity O(M^3) (M: feature size) to update the regularization priors, which makes it difficult to apply to problems with a high-dimensional feature space or a large data size, as it may easily suffer from memory overflow in such problems. This paper addresses this issue with a newly proposed diagonal Quasi-Newton (DQN) method for SBL, called DQN-SBL, in which the inversion of the large covariance matrix is avoided so that the complexity is reduced to O(M). DQN-SBL is thoroughly evaluated on nonlinear and linear classification with various benchmarks of different sizes. Experimental results verify that DQN-SBL achieves competitive generalization with a very sparse model and scales well to large-scale problems.
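The abstract's central point is the complexity reduction: standard SBL updates its hyperparameters through the posterior covariance, which requires inverting an M x M matrix at O(M^3) cost, whereas a diagonal quasi-Newton approximation keeps only M curvature values and so costs O(M) per update. The following is a minimal, hypothetical Python sketch of that idea for sparse Bayesian logistic regression; the function name dqn_sbl_logistic, the secant-style diagonal curvature update, and the use of the inverse diagonal as a stand-in for the posterior variances are illustrative assumptions based on the abstract, not the authors' actual DQN-SBL algorithm.

import numpy as np

def dqn_sbl_logistic(Phi, y, n_outer=20, n_inner=30, alpha0=1.0, eps=1e-8):
    # Illustrative sketch only: sparse Bayesian logistic regression with a
    # diagonal Hessian surrogate instead of the full covariance inverse.
    # Phi: (N, M) design matrix, y: (N,) labels in {0, 1}.
    N, M = Phi.shape
    w = np.zeros(M)
    alpha = np.full(M, alpha0)      # per-weight precision hyperparameters
    d = np.ones(M)                  # diagonal approximation of the Hessian

    def grad(w):
        p = 1.0 / (1.0 + np.exp(-Phi @ w))
        return Phi.T @ (p - y) + alpha * w   # gradient of the negative log-posterior

    for _ in range(n_outer):
        w_prev, g_prev = w.copy(), grad(w)
        for _ in range(n_inner):
            w = w - grad(w) / (d + eps)      # diagonal quasi-Newton step: O(M), no matrix inverse
            g = grad(w)
            s, t = w - w_prev, g - g_prev    # secant pair for the curvature update
            ratio = t / np.where(np.abs(s) > eps, s, 1.0)
            d = np.where(np.abs(s) > eps, np.clip(ratio, eps, None), d)
            w_prev, g_prev = w.copy(), g
        # SBL-style prior update, using 1/d in place of the posterior variances Sigma_ii
        sigma_ii = 1.0 / (d + eps)
        gamma = np.clip(1.0 - alpha * sigma_ii, eps, None)
        alpha = gamma / (w ** 2 + eps)       # large alpha_i prunes weight i toward zero
    return w, alpha

Because the loop above never forms an M x M matrix, its memory footprint stays linear in the feature size, which is the scalability behaviour the abstract attributes to DQN-SBL; the numerical details of the real method should be taken from the paper itself.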
