ResearchTrend.AI

arXiv:2102.07542

High-Dimensional Gaussian Process Inference with Derivatives

15 February 2021
Filip de Roos, A. Gessner, Philipp Hennig
Abstract

Although it is widely known that Gaussian processes can be conditioned on observations of the gradient, this functionality is of limited use due to the prohibitive computational cost of $\mathcal{O}(N^3 D^3)$ in data points $N$ and dimension $D$. The dilemma of gradient observations is that a single one of them comes at the same cost as $D$ independent function evaluations, so the latter are often preferred. Careful scrutiny reveals, however, that derivative observations give rise to highly structured kernel Gram matrices for very general classes of kernels (inter alia, stationary kernels). We show that in the low-data regime $N < D$, the Gram matrix can be decomposed in a manner that reduces the cost of inference to $\mathcal{O}(N^2 D + (N^2)^3)$ (i.e., linear in the number of dimensions) and, in special cases, to $\mathcal{O}(N^2 D + N^3)$. This reduction in complexity opens up new use cases for inference with gradients, especially in the high-dimensional regime, where the information-to-cost ratio of gradient observations significantly increases. We demonstrate this potential in a variety of tasks relevant for machine learning, such as optimization and Hamiltonian Monte Carlo with predictive gradients.
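The $\mathcal{O}(N^3 D^3)$ bottleneck the abstract refers to is easy to see in the naive baseline. The sketch below (an illustration under assumed choices — an RBF kernel, unit lengthscale, and all variable names are this sketch's own — not the paper's structured decomposition) conditions a GP on function values *and* gradients by assembling the dense joint Gram matrix over the stacked vector $[f(x_1), \dots, f(x_N), \nabla f(x_1), \dots, \nabla f(x_N)]$. That matrix has size $N(D+1) \times N(D+1)$, so a dense solve costs $\mathcal{O}(N^3 D^3)$:

```python
import numpy as np

# Naive-baseline sketch (NOT the paper's fast method): condition an RBF-kernel
# GP on function values and full gradients via the dense joint Gram matrix.

def rbf(X1, X2, ell=1.0, sigma=1.0):
    """RBF kernel k(x, x') = sigma^2 exp(-||x - x'||^2 / (2 ell^2)).
    Returns the kernel matrix and the pairwise differences x - x'."""
    diff = X1[:, None, :] - X2[None, :, :]                    # (N1, N2, D)
    K = sigma**2 * np.exp(-np.sum(diff**2, -1) / (2 * ell**2))
    return K, diff

def joint_gram(X, ell=1.0, sigma=1.0):
    """Gram matrix over [f(x_1..N); grad f(x_1); ...; grad f(x_N)]."""
    N, D = X.shape
    K, diff = rbf(X, X, ell, sigma)
    # cov(f(x_n), df(x_m)/dx_j) = (x_n - x_m)_j / ell^2 * k(x_n, x_m)
    Kfg = diff / ell**2 * K[:, :, None]                       # (N, N, D)
    # cov(df(x_n)/dx_i, df(x_m)/dx_j)
    Kgg = K[:, :, None, None] * (np.eye(D)[None, None] / ell**2
          - diff[:, :, :, None] * diff[:, :, None, :] / ell**4)
    G = np.empty((N * (D + 1), N * (D + 1)))
    G[:N, :N] = K
    G[:N, N:] = Kfg.reshape(N, N * D)                         # col index m*D + j
    G[N:, :N] = G[:N, N:].T
    G[N:, N:] = Kgg.transpose(0, 2, 1, 3).reshape(N * D, N * D)
    return G

# Low-data, high-dimensional regime N < D: observe f(x) = sin(w.x) + gradients.
rng = np.random.default_rng(0)
N, D = 4, 20
X = 0.3 * rng.standard_normal((N, D))
w = 0.2 * rng.standard_normal(D)
f = np.sin(X @ w)
grad = np.cos(X @ w)[:, None] * w[None, :]                    # df/dx = cos(w.x) w
y = np.concatenate([f, grad.ravel()])

G = joint_gram(X)                                             # (84, 84) = N(D+1)
alpha = np.linalg.solve(G + 1e-10 * np.eye(len(G)), y)        # O(N^3 D^3) solve

# Posterior mean at a training input (ell = 1, so no 1/ell^2 factor needed);
# it should reproduce the observed function value there.
Kx, dx = rbf(X[:1], X)
k_joint = np.concatenate([Kx[0], (dx[0] * Kx[0][:, None]).ravel()])
mean = k_joint @ alpha
```

The paper's point is that this $N(D+1)$-sized system need never be formed explicitly: for stationary kernels the blocks above are highly structured, which is what brings the cost down to linear in $D$.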
