Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility

17 May 2022
Hoileong Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron

Papers citing "Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility"

9 / 9 papers shown
  • Deep Kernel Posterior Learning under Infinite Variance Prior Weights. Jorge Loría, A. Bhadra. 02 Oct 2024.
  • Wide stable neural networks: Sample regularity, functional convergence and Bayesian inverse problems. Tomás Soto. 04 Jul 2024.
  • Gaussian random field approximation via Stein's method with applications to wide random neural networks. Krishnakumar Balasubramanian, L. Goldstein, Nathan Ross, Adil Salim. 28 Jun 2023.
  • Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD. Yijun Wan, Melih Barsbey, A. Zaidi, Umut Simsekli. 13 Jun 2023.
  • Posterior Inference on Shallow Infinitely Wide Bayesian Neural Networks under Weights with Unbounded Variance. Jorge Loría, A. Bhadra. 18 May 2023.
  • Infinitely wide limits for deep Stable neural networks: sub-linear, linear and super-linear activation functions. Alberto Bordino, Stefano Favaro, S. Fortini. 08 Apr 2023.
  • Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning. François Caron, Fadhel Ayed, Paul Jung, Hoileong Lee, Juho Lee, Hongseok Yang. 02 Feb 2023.
  • Large-width asymptotics for ReLU neural networks with α-Stable initializations. Stefano Favaro, S. Fortini, Stefano Peluchetti. 16 Jun 2022.
  • Why bigger is not always better: on finite and infinite neural networks. Laurence Aitchison. 17 Oct 2019.