ResearchTrend.AI

Gaussian Process Behaviour in Wide Deep Neural Networks

30 April 2018
A. G. Matthews
Mark Rowland
Jiri Hron
Richard Turner
Zoubin Ghahramani
Topics: BDL

Papers citing "Gaussian Process Behaviour in Wide Deep Neural Networks"

Showing 50 of 391 citing papers (title, publication date, authors, and topic tags where listed):

  • M22: A Communication-Efficient Algorithm for Federated Learning Inspired by Rate-Distortion (23 Jan 2023). Yangyi Liu, Stefano Rini, Sadaf Salehkalaibar, Jun Chen. [FedML]
  • Mining Explainable Predictive Features for Water Quality Management (08 Dec 2022). C. Muldoon, Levent Gorgu, J. O'Sullivan, W. Meijer, Gregory M. P. O'Hare. [FAtt]
  • Statistical Physics of Deep Neural Networks: Initialization toward Optimal Channels (04 Dec 2022). Kangyu Weng, Aohua Cheng, Ziyang Zhang, Pei Sun, Yang Tian.
  • An Empirical Analysis of the Advantages of Finite- vs. Infinite-Width Bayesian Neural Networks (16 Nov 2022). Jiayu Yao, Yaniv Yacoby, Beau Coker, Weiwei Pan, Finale Doshi-Velez.
  • Characterizing the Spectrum of the NTK via a Power Series Expansion (15 Nov 2022). Michael Murray, Hui Jin, Benjamin Bowman, Guido Montúfar.
  • Overparameterized random feature regression with nearly orthogonal data (11 Nov 2022). Zhichao Wang, Yizhe Zhu.
  • A Bayesian Semiparametric Method For Estimating Causal Quantile Effects (03 Nov 2022). Steven G. Xu, Shu Yang, Brian J. Reich. [CML]
  • Globally Gated Deep Linear Networks (31 Oct 2022). Qianyi Li, H. Sompolinsky. [AI4CE]
  • A Solvable Model of Neural Scaling Laws (30 Oct 2022). A. Maloney, Daniel A. Roberts, J. Sully.
  • Proximal Mean Field Learning in Shallow Neural Networks (25 Oct 2022). Alexis M. H. Teter, Iman Nodozi, A. Halder. [FedML]
  • Accelerating the training of single-layer binary neural networks using the HHL quantum algorithm (23 Oct 2022). S. L. Alarcón, Cory E. Merkel, Martin Hoffnagle, Sabrina Ly, Alejandro Pozas-Kerstjens.
  • Efficient Dataset Distillation Using Random Feature Approximation (21 Oct 2022). Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus. [DD]
  • Evolution of Neural Tangent Kernels under Benign and Adversarial Training (21 Oct 2022). Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus. [AAML]
  • Meta-Principled Family of Hyperparameter Scaling Strategies (10 Oct 2022). Sho Yaida.
  • The Influence of Learning Rule on Representation Dynamics in Wide Neural Networks (05 Oct 2022). Blake Bordelon, Cengiz Pehlevan.
  • On the infinite-depth limit of finite-width neural networks (03 Oct 2022). Soufiane Hayou.
  • Batch Bayesian optimisation via density-ratio estimation with guarantees (22 Sep 2022). Rafael Oliveira, Louis C. Tiao, Fabio Ramos.
  • Variational Inference for Infinitely Deep Neural Networks (21 Sep 2022). Achille Nazaret, David M. Blei. [BDL]
  • Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study (16 Sep 2022). Yongtao Wu, Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher.
  • Fast Neural Kernel Embeddings for General Activations (09 Sep 2022). Insu Han, A. Zandieh, Jaehoon Lee, Roman Novak, Lechao Xiao, Amin Karbasi.
  • On Kernel Regression with Data-Dependent Kernels (04 Sep 2022). James B. Simon. [BDL]
  • Gaussian Process Surrogate Models for Neural Networks (11 Aug 2022). Michael Y. Li, Erin Grant, Thomas Griffiths. [BDL, SyDa]
  • Deep Maxout Network Gaussian Process (08 Aug 2022). Libin Liang, Ye Tian, Ge Cheng. [BDL]
  • Single Model Uncertainty Estimation via Stochastic Data Centering (14 Jul 2022). Jayaraman J. Thiagarajan, Rushil Anirudh, V. Narayanaswamy, P. Bremer. [UQCV, OOD]
  • On the Robustness of Bayesian Neural Networks to Adversarial Attacks (13 Jul 2022). Luca Bortolussi, Ginevra Carbone, Luca Laurenti, A. Patané, G. Sanguinetti, Matthew Wicker. [AAML]
  • Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm (11 Jul 2022). Lechao Xiao, Jeffrey Pennington.
  • Memory Safe Computations with XLA Compiler (28 Jun 2022). A. Artemev, Tilman Roeder, Mark van der Wilk.
  • AutoInit: Automatic Initialization via Jacobian Tuning (27 Jun 2022). Tianyu He, Darshil Doshi, Andrey Gromov.
  • Making Look-Ahead Active Learning Strategies Feasible with Neural Tangent Kernels (25 Jun 2022). Mohamad Amin Mohamadi, Wonho Bae, Danica J. Sutherland.
  • A Fast, Well-Founded Approximation to the Empirical Neural Tangent Kernel (25 Jun 2022). Mohamad Amin Mohamadi, Wonho Bae, Danica J. Sutherland. [AAML]
  • Fast Finite Width Neural Tangent Kernel (17 Jun 2022). Roman Novak, Jascha Narain Sohl-Dickstein, S. Schoenholz. [AAML]
  • Large-width asymptotics for ReLU neural networks with α-Stable initializations (16 Jun 2022). Stefano Favaro, S. Fortini, Stefano Peluchetti.
  • Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling (15 Jun 2022). Jiri Hron, Roman Novak, Jeffrey Pennington, Jascha Narain Sohl-Dickstein. [UQCV, BDL]
  • Wavelet Regularization Benefits Adversarial Training (08 Jun 2022). Jun Yan, Huilin Yin, Xiaoyang Deng, Zi-qin Zhao, Wancheng Ge, Hao Zhang, Gerhard Rigoll. [AAML]
  • Asymptotic Properties for Bayesian Neural Network in Besov Space (01 Jun 2022). Kyeongwon Lee, Jaeyong Lee. [BDL]
  • Optimal Activation Functions for the Random Features Regression Model (31 May 2022). Jianxin Wang, José Bento.
  • Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters (27 May 2022). Seyed Kamyar Seyed Ghasemipour, S. Gu, Ofir Nachum. [OffRL]
  • On Bridging the Gap between Mean Field and Finite Width in Deep Random Neural Networks with Batch Normalization (25 May 2022). Amir Joudaki, Hadi Daneshmand, Francis R. Bach. [AI4CE]
  • Gaussian Pre-Activations in Neural Networks: Myth or Reality? (24 May 2022). Pierre Wolinski, Julyan Arbel. [AI4CE]
  • Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks (19 May 2022). Blake Bordelon, Cengiz Pehlevan. [MLT]
  • Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility (17 May 2022). Hoileong Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron.
  • Incorporating Prior Knowledge into Neural Networks through an Implicit Composite Kernel (15 May 2022). Ziyang Jiang, Tongshu Zheng, Yiling Liu, David Carlson.
  • Generalized Variational Inference in Function Spaces: Gaussian Measures meet Bayesian Deep Learning (12 May 2022). Veit Wild, Robert Hu, Dino Sejdinovic. [BDL]
  • Investigating Generalization by Controlling Normalized Margin (08 May 2022). Alexander R. Farhang, Jeremy Bernstein, Kushal Tirumala, Yang Liu, Yisong Yue.
  • NeuralEF: Deconstructing Kernels by Deep Neural Networks (30 Apr 2022). Zhijie Deng, Jiaxin Shi, Jun Zhu.
  • Convergence of neural networks to Gaussian mixture distribution (26 Apr 2022). Yasuhiko Asao, Ryotaro Sakamoto, S. Takagi. [BDL]
  • Polynomial-time Sparse Measure Recovery: From Mean Field Theory to Algorithm Design (16 Apr 2022). Hadi Daneshmand, Francis R. Bach.
  • Towards a Unified Framework for Uncertainty-aware Nonlinear Variable Selection with Theoretical Guarantees (15 Apr 2022). Wenying Deng, Beau Coker, Rajarshi Mukherjee, J. Liu, B. Coull.
  • Single-level Adversarial Data Synthesis based on Neural Tangent Kernels (08 Apr 2022). Yu-Rong Zhang, Ruei-Yang Su, Sheng-Yen Chou, Shan Wu. [GAN]
  • Bayesian Deep Learning with Multilevel Trace-class Neural Networks (24 Mar 2022). Neil K. Chada, Ajay Jasra, K. Law, Sumeetpal S. Singh. [BDL, UQCV]