Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning

8 February 2022
Sattar Vakili
Jonathan Scarlett
Da-shan Shiu
A. Bernacchia
arXiv:2202.04005 (PDF / HTML)
Abstract

Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications for regression and optimization. It is well known that a major downside of kernel-based models is their high computational cost: given a dataset of $n$ samples, the cost grows as $\mathcal{O}(n^3)$. Existing sparse approximation methods can yield a significant reduction in the computational cost, effectively reducing it to as low as $\mathcal{O}(n)$ in certain cases. Despite this remarkable empirical success, significant gaps remain in the existing analytical bounds on the error due to approximation. In this work, we provide novel confidence intervals for the Nyström method and the sparse variational Gaussian process approximation method, which we establish using novel interpretations of the approximate (surrogate) posterior variance of the models. Our confidence intervals lead to improved performance bounds in both regression and optimization problems.
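To make the cost reduction concrete, the following is a minimal, self-contained Python sketch (not taken from the paper) of kernel ridge regression with a Nyström-style low-rank approximation. The function names (nystrom_krr_fit, nystrom_krr_predict), the RBF kernel, the uniform random choice of m = 50 landmark points, and the regularization value are illustrative assumptions; the paper analyzes the error and confidence intervals incurred by this kind of approximation, not any particular implementation.

# Illustrative sketch (not from the paper): Nystrom-approximated kernel ridge
# regression with an RBF kernel and m randomly chosen landmark points.
# With m << n, the dominant training cost is roughly O(n m^2) instead of the
# O(n^3) needed to factor the full n x n kernel matrix.
import numpy as np


def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential kernel matrix between the rows of X and Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)


def nystrom_krr_fit(X, y, m=50, reg=1e-2, seed=0):
    # Fit kernel ridge regression on the rank-m Nystrom subspace spanned by
    # m landmark inputs Z; predictions take the form k(x, Z) @ alpha.
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=min(m, X.shape[0]), replace=False)
    Z = X[idx]                      # landmark / inducing inputs
    K_nm = rbf_kernel(X, Z)         # n x m cross-kernel
    K_mm = rbf_kernel(Z, Z)         # m x m kernel among landmarks
    # Solve the m x m system (K_nm^T K_nm + reg * K_mm) alpha = K_nm^T y,
    # so no n x n matrix is ever formed or inverted.
    A = K_nm.T @ K_nm + reg * K_mm + 1e-10 * np.eye(len(Z))
    alpha = np.linalg.solve(A, K_nm.T @ y)
    return Z, alpha


def nystrom_krr_predict(X_new, Z, alpha):
    return rbf_kernel(X_new, Z) @ alpha


# Toy usage: 2000 noisy samples of a 1-D function, approximated with m = 50.
X = np.random.default_rng(1).uniform(size=(2000, 1))
y = np.sin(6.0 * X[:, 0]) + 0.1 * np.random.default_rng(2).normal(size=2000)
Z, alpha = nystrom_krr_fit(X, y, m=50)
y_hat = nystrom_krr_predict(X[:5], Z, alpha)

The key design point is that only the $n \times m$ cross-kernel and an $m \times m$ linear system appear, which is what brings the cost from $\mathcal{O}(n^3)$ down toward linear in $n$ when $m$ is treated as a constant; the paper's contribution is to quantify, via confidence intervals on the approximate posterior, how much accuracy such approximations sacrifice in regression and optimization.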
