Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes

17 May 2020
Sebastian W. Ober, Laurence Aitchison · BDL

Papers citing "Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes"

14 papers shown

Sparse Gaussian Neural Processes
Tommy Rochussen, Vincent Fortuin · BDL, UQCV
02 Apr 2025

Are you using test log-likelihood correctly?
Sameer K. Deshpande, Soumya K. Ghosh, Tin D. Nguyen, Tamara Broderick
01 Dec 2022

Adapting the Linearised Laplace Model Evidence for Modern Deep Learning
Javier Antorán, David Janz, J. Allingham, Erik A. Daxberger, Riccardo Barbano, Eric T. Nalisnick, José Miguel Hernández-Lobato · UQCV, BDL
17 Jun 2022

Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling
Jiri Hron, Roman Novak, Jeffrey Pennington, Jascha Narain Sohl-Dickstein · UQCV, BDL
15 Jun 2022

Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility
Hoileong Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron
17 May 2022

Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations
Alexander Immer, Tycho F. A. van der Ouderaa, Gunnar Rätsch, Vincent Fortuin, Mark van der Wilk · BDL
22 Feb 2022

Gradient Descent on Neurons and its Link to Approximate Second-Order Optimization
Frederik Benzing · ODL
28 Jan 2022

Conditional Deep Gaussian Processes: empirical Bayes hyperdata learning
Chi-Ken Lu, Patrick Shafto · BDL
01 Oct 2021

The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
Geoff Pleiss, John P. Cunningham
11 Jun 2021

Data augmentation in Bayesian neural networks and the cold posterior effect
Seth Nabarro, Stoil Ganev, Adrià Garriga-Alonso, Vincent Fortuin, Mark van der Wilk, Laurence Aitchison · BDL
10 Jun 2021

Priors in Bayesian Deep Learning: A Review
Vincent Fortuin · UQCV, BDL
14 May 2021

The Promises and Pitfalls of Deep Kernel Learning
Sebastian W. Ober, C. Rasmussen, Mark van der Wilk · UQCV, BDL
24 Feb 2021

Why bigger is not always better: on finite and infinite neural networks
Laurence Aitchison
17 Oct 2019

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani · UQCV, BDL
06 Jun 2015