Gaussian Process Behaviour in Wide Deep Neural Networks
30 April 2018
A. G. Matthews, Mark Rowland, Jiri Hron, Richard Turner, Zoubin Ghahramani
BDL

Papers citing "Gaussian Process Behaviour in Wide Deep Neural Networks"

Showing 50 of 391 papers.
Scale Mixtures of Neural Network Gaussian Processes
Hyungi Lee, Eunggu Yun, Hongseok Yang, Juho Lee
UQCV, BDL · 03 Jul 2021

Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks
Etai Littwin, Omid Saremi, Shuangfei Zhai, Vimal Thilak, Hanlin Goh, J. Susskind, Greg Yang
01 Jul 2021

Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity
Arthur Jacot, François Ged, Berfin Şimşek, Clément Hongler, Franck Gabriel
30 Jun 2021

α-Stable convergence of heavy-tailed infinitely-wide neural networks
Paul Jung, Hoileong Lee, Jiho Lee, Hongseok Yang
18 Jun 2021

Wide stochastic networks: Gaussian limit and PAC-Bayesian training
Eugenio Clerico, George Deligiannidis, Arnaud Doucet
17 Jun 2021

Locality defeats the curse of dimensionality in convolutional teacher-student scenarios
Alessandro Favero, Francesco Cagnetta, M. Wyart
16 Jun 2021

How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
Akhilan Boopathy, Ila Fiete
15 Jun 2021

Scaling Neural Tangent Kernels via Sketching and Random Features
A. Zandieh, Insu Han, H. Avron, N. Shoham, Chaewon Kim, Jinwoo Shin
15 Jun 2021

Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data
Beau Coker, Weiwei Pan, Finale Doshi-Velez
BDL · 13 Jun 2021

Precise characterization of the prior predictive distribution of deep ReLU networks
Lorenzo Noci, Gregor Bachmann, Kevin Roth, Sebastian Nowozin, Thomas Hofmann
BDL, UQCV · 11 Jun 2021

The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
Geoff Pleiss, John P. Cunningham
11 Jun 2021

Measuring the robustness of Gaussian processes to kernel choice
William T. Stephenson, S. Ghosh, Tin D. Nguyen, Mikhail Yurochkin, Sameer K. Deshpande, Tamara Broderick
GP · 11 Jun 2021

A self consistent theory of Gaussian Processes captures feature learning effects in finite CNNs
Gadi Naveh, Zohar Ringel
SSL, MLT · 08 Jun 2021

The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization
Mufan Li, Mihai Nica, Daniel M. Roy
07 Jun 2021

Batch Normalization Orthogonalizes Representations in Deep Random Networks
Hadi Daneshmand, Amir Joudaki, Francis R. Bach
OOD · 07 Jun 2021

Reverse Engineering the Neural Tangent Kernel
James B. Simon, Sajant Anand, M. DeWeese
06 Jun 2021

Regularization in ResNet with Stochastic Depth
Soufiane Hayou, Fadhel Ayed
06 Jun 2021

Symmetry-via-Duality: Invariant Neural Network Densities from Parameter-Space Correlators
Anindita Maiti, Keegan Stoner, James Halverson
01 Jun 2021

Asymptotics of representation learning in finite Bayesian neural networks
Jacob A. Zavatone-Veth, Abdulkadir Canatar, Benjamin S. Ruben, Cengiz Pehlevan
01 Jun 2021

Sparse Uncertainty Representation in Deep Learning with Inducing Weights
H. Ritter, Martin Kukla, Chen Zhang, Yingzhen Li
UQCV, BDL · 30 May 2021

Activation function design for deep networks: linearity and effective initialisation
Michael Murray, V. Abrol, Jared Tanner
ODL, LLMSV · 17 May 2021

Posterior contraction for deep Gaussian process priors
G. Finocchio, Johannes Schmidt-Hieber
16 May 2021

Priors in Bayesian Deep Learning: A Review
Vincent Fortuin
UQCV, BDL · 14 May 2021

Deep Neural Networks as Point Estimates for Deep Gaussian Processes
Vincent Dutordoir, J. Hensman, Mark van der Wilk, Carl Henrik Ek, Zoubin Ghahramani, N. Durrande
BDL, UQCV · 10 May 2021

Tensor Programs IIb: Architectural Universality of Neural Tangent Kernel Training Dynamics
Greg Yang, Etai Littwin
08 May 2021

Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes
James Lucas, Juhan Bae, Michael Ruogu Zhang, Stanislav Fort, R. Zemel, Roger C. Grosse
MoMe · 22 Apr 2021

Adversarial Robustness Guarantees for Gaussian Processes
A. Patané, Arno Blaas, Luca Laurenti, L. Cardelli, Stephen J. Roberts, Marta Z. Kwiatkowska
GP, AAML · 07 Apr 2021

Learning with Neural Tangent Kernels in Near Input Sparsity Time
A. Zandieh
01 Apr 2021

Weighted Neural Tangent Kernel: A Generalized and Improved Network-Induced Kernel
Lei Tan, Shutong Wu, Xiaolin Huang
22 Mar 2021

Why flatness does and does not correlate with generalization for deep neural networks
Shuo Zhang, Isaac Reid, Guillermo Valle Pérez, A. Louis
10 Mar 2021

Towards Deepening Graph Neural Networks: A GNTK-based Optimization Perspective
Wei Huang, Yayong Li, Weitao Du, Jie Yin, R. Xu, Ling-Hao Chen, Miao Zhang
03 Mar 2021

Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed
Maria Refinetti, Sebastian Goldt, Florent Krzakala, Lenka Zdeborová
23 Feb 2021

Large-width functional asymptotics for deep Gaussian neural networks
Daniele Bracale, Stefano Favaro, S. Fortini, Stefano Peluchetti
20 Feb 2021

Quantum field-theoretic machine learning
Dimitrios Bachtis, Gert Aarts, B. Lucini
AI4CE · 18 Feb 2021

Non-asymptotic approximations of neural networks by Gaussian processes
Ronen Eldan, Dan Mikulincer, T. Schramm
17 Feb 2021

Cross-modal Adversarial Reprogramming
Paarth Neekhara, Shehzeen Samarah Hussain, Jinglong Du, Shlomo Dubnov, F. Koushanfar, Julian McAuley
15 Feb 2021

Double-descent curves in neural networks: a new perspective using Gaussian processes
Ouns El Harzli, Bernardo Cuenca Grau, Guillermo Valle Pérez, A. Louis
14 Feb 2021

Explaining Neural Scaling Laws
Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, Utkarsh Sharma
12 Feb 2021

Bayesian Neural Network Priors Revisited
Vincent Fortuin, Adrià Garriga-Alonso, Sebastian W. Ober, F. Wenzel, Gunnar Rätsch, Richard Turner, Mark van der Wilk, Laurence Aitchison
BDL, UQCV · 12 Feb 2021

Reducing the Amortization Gap in Variational Autoencoders: A Bayesian Random Function Approach
Minyoung Kim, Vladimir Pavlovic
BDL · 05 Feb 2021

Faster Kernel Interpolation for Gaussian Processes
Mohit Yadav, Daniel Sheldon, Cameron Musco
BDL · 28 Jan 2021

Implicit Bias of Linear RNNs
M. Motavali Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, S. Rangan, A. Fletcher
19 Jan 2021

Correlated Weights in Infinite Limits of Deep Convolutional Neural Networks
Adrià Garriga-Alonso, Mark van der Wilk
11 Jan 2021

Infinitely Wide Tensor Networks as Gaussian Process
Erdong Guo, D. Draper
07 Jan 2021

Perspective: A Phase Diagram for Deep Learning unifying Jamming, Feature Learning and Lazy Training
Mario Geiger, Leonardo Petrini, M. Wyart
DRL · 30 Dec 2020

Trace-class Gaussian priors for Bayesian learning of neural networks with MCMC
Torben Sell, Sumeetpal S. Singh
BDL · 20 Dec 2020

Analyzing Finite Neural Networks: Can We Trust Neural Tangent Kernel Theory?
Mariia Seleznova, Gitta Kutyniok
AAML · 08 Dec 2020

Generalization bounds for deep learning
Guillermo Valle Pérez, A. Louis
BDL · 07 Dec 2020

All You Need is a Good Functional Prior for Bayesian Deep Learning
Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Maurizio Filippone
OOD, BDL · 25 Nov 2020

Neural Network Gaussian Process Considering Input Uncertainty for Composite Structures Assembly
Cheolhei Lee, Jianguo Wu, Wei Cao, Xiaowei Yue
21 Nov 2020