A variational approximate posterior for the deep Wishart process

21 July 2021
Sebastian W. Ober, Laurence Aitchison
BDL

Papers citing "A variational approximate posterior for the deep Wishart process"

8 / 8 papers shown
Stochastic Kernel Regularisation Improves Generalisation in Deep Kernel Machines
Edward Milsom, Ben Anson, Laurence Aitchison
08 Oct 2024
Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI
Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, ..., David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
UQCV, BDL
01 Feb 2024
Convolutional Deep Kernel Machines
Edward Milsom, Ben Anson, Laurence Aitchison
BDL
18 Sep 2023
An Improved Variational Approximate Posterior for the Deep Wishart Process
Sebastian W. Ober, Ben Anson, Edward Milsom, Laurence Aitchison
BDL
23 May 2023
Guided Deep Kernel Learning
Idan Achituve, Gal Chechik, Ethan Fetaya
BDL
19 Feb 2023
A theory of representation learning gives a deep generalisation of kernel methods
Adam X. Yang, Maxime Robeyns, Edward Milsom, Ben Anson, Nandi Schoots, Laurence Aitchison
BDL
30 Aug 2021
Why bigger is not always better: on finite and infinite neural networks
Laurence Aitchison
17 Oct 2019
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
UQCV, BDL
06 Jun 2015