ResearchTrend.AI

On the inability of Gaussian process regression to optimally learn compositional functions
arXiv:2205.07764
16 May 2022
M. Giordano, Kolyan Ray, Johannes Schmidt-Hieber

Papers citing "On the inability of Gaussian process regression to optimally learn compositional functions"

11 / 11 papers shown

Support Collapse of Deep Gaussian Processes with Polynomial Kernels for a Wide Regime of Hyperparameters
Daryna Chernobrovkina, Steffen Grünewälder
15 Mar 2025

On strong posterior contraction rates for Besov-Laplace priors in the white noise model
Emanuele Dolera, Stefano Favaro, Matteo Giordano
11 Nov 2024

How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Arthur Jacot, Seok Hoan Choi, Yuxiao Wen
08 Jul 2024

Deep Horseshoe Gaussian Processes
Ismael Castillo, Thibault Randrianarisoa
04 Mar 2024

Deep Gaussian Process Priors for Bayesian Inference in Nonlinear Inverse Problems
Kweku Abraham, Neil Deo
21 Dec 2023

Deep Latent Force Models: ODE-based Process Convolutions for Bayesian Deep Learning
Thomas Baldwin-McDonald, Mauricio A. Álvarez
24 Nov 2023

Besov-Laplace priors in density estimation: optimal posterior contraction rates and adaptation
M. Giordano
30 Aug 2022

What Can Be Learnt With Wide Convolutional Neural Networks?
Francesco Cagnetta, Alessandro Favero, M. Wyart
01 Aug 2022

Posterior contraction rates for constrained deep Gaussian processes in density estimation and classification
F. Bachoc, A. Lagnoux
14 Dec 2021

Posterior contraction for deep Gaussian process priors
G. Finocchio, Johannes Schmidt-Hieber
16 May 2021

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016