Rates of Convergence for Sparse Variational Gaussian Process Regression

8 March 2019
David R. Burt
C. Rasmussen
Mark van der Wilk
arXiv:1903.03571
Abstract

Excellent variational approximations to Gaussian process posteriors have been developed which avoid the $\mathcal{O}(N^3)$ scaling with dataset size $N$. They reduce the computational cost to $\mathcal{O}(NM^2)$, with $M \ll N$ being the number of inducing variables, which summarise the process. While the computational cost seems to be linear in $N$, the true complexity of the algorithm depends on how $M$ must increase to ensure a certain quality of approximation. We address this by characterising the behavior of an upper bound on the KL divergence to the posterior. We show that with high probability the KL divergence can be made arbitrarily small by growing $M$ more slowly than $N$. A particular case of interest is that for regression with normally distributed inputs in $D$ dimensions with the popular squared exponential kernel, $M = \mathcal{O}(\log^D N)$ is sufficient. Our results show that as datasets grow, Gaussian process posteriors can truly be approximated cheaply, and provide a concrete rule for how to increase $M$ in continual learning scenarios.
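As a rough illustration of the scaling claim, the sketch below turns the $M = \mathcal{O}(\log^D N)$ rate into an inducing-point schedule one might use when data arrive over time. The functional form and the constant `c` are assumptions for illustration only; in the paper the constants depend on the kernel hyperparameters, the input distribution, and the target KL accuracy, and are not given by the abstract.

```python
import math

def suggested_num_inducing(N: int, D: int, c: float = 1.0) -> int:
    """Illustrative schedule M = ceil(c * (log N)^D), following the
    M = O(log^D N) rate quoted in the abstract for the squared
    exponential kernel with normally distributed D-dimensional inputs.
    The constant c is an assumption; the paper's bounds would fix it
    from the kernel lengthscales, input density, and desired accuracy."""
    return max(1, math.ceil(c * math.log(N) ** D))

# Example: in a continual-learning setting, grow M as data accumulate.
for N in [1_000, 10_000, 100_000, 1_000_000]:
    M = suggested_num_inducing(N, D=2)
    print(f"N = {N:>9,d}  ->  M = {M:4d}  (M/N = {M / N:.5f})")
```

The printout makes the point of the result concrete: as $N$ grows by three orders of magnitude, $M$ grows only polylogarithmically, so the ratio $M/N$, and with it the per-datum cost of the $\mathcal{O}(NM^2)$ approximation, keeps shrinking.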
