Deep Neural Networks as Gaussian Processes (arXiv:1711.00165)
1 November 2017
Jaehoon Lee
Yasaman Bahri
Roman Novak
S. Schoenholz
Jeffrey Pennington
Jascha Narain Sohl-Dickstein
UQCV
BDL
Papers citing "Deep Neural Networks as Gaussian Processes" (50 of 692 papers shown)
Learning curves for Gaussian process regression with power-law priors and targets
Hui Jin
P. Banerjee
Guido Montúfar
16
17
0
23 Oct 2021
Using scientific machine learning for experimental bifurcation analysis of dynamic systems
S. Beregi
David A.W. Barton
D. Rezgui
S. Neild
AI4CE
50
19
0
22 Oct 2021
Feature Learning and Signal Propagation in Deep Neural Networks
Yizhang Lou
Chris Mingard
Yoonsoo Nam
Soufiane Hayou
MDE
29
17
0
22 Oct 2021
Self-supervised denoising for massive noisy images
Feng Wang
Trond R. Henninen
D. Keller
R. Erni
13
0
0
18 Oct 2021
Centroid Approximation for Bootstrap: Improving Particle Quality at Inference
Mao Ye
Qiang Liu
27
1
0
17 Oct 2021
Training Neural Networks for Solving 1-D Optimal Piecewise Linear Approximation
Hangcheng Dong
Jing-Xiao Liao
Yan Wang
Yixin Chen
Bingguo Liu
Dong Ye
Guodong Liu
152
0
0
14 Oct 2021
Implicit Bias of Linear Equivariant Networks
Hannah Lawrence
Kristian Georgiev
A. Dienes
B. Kiani
AI4CE
45
14
0
12 Oct 2021
On out-of-distribution detection with Bayesian neural networks
Francesco D'Angelo
Christian Henning
BDL
UQCV
29
6
0
12 Oct 2021
Imitating Deep Learning Dynamics via Locally Elastic Stochastic Differential Equations
Jiayao Zhang
Hua Wang
Weijie J. Su
35
8
0
11 Oct 2021
Kernel Interpolation as a Bayes Point Machine
Jeremy Bernstein
Alexander R. Farhang
Yisong Yue
BDL
34
4
0
08 Oct 2021
New Insights into Graph Convolutional Networks using Neural Tangent Kernels
Mahalakshmi Sabanayagam
P. Esser
D. Ghoshdastidar
29
6
0
08 Oct 2021
The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks
James B. Simon
Madeline Dickens
Dhruva Karkada
M. DeWeese
52
27
0
08 Oct 2021
Bayesian neural network unit priors and generalized Weibull-tail property
M. Vladimirova
Julyan Arbel
Stéphane Girard
BDL
54
9
0
06 Oct 2021
On the Impact of Stable Ranks in Deep Nets
B. Georgiev
L. Franken
Mayukh Mukherjee
Georgios Arvanitidis
23
3
0
05 Oct 2021
On the Correspondence between Gaussian Processes and Geometric Harmonics
Felix Dietrich
J. M. Bello-Rivas
Ioannis G. Kevrekidis
34
3
0
05 Oct 2021
Random matrices in service of ML footprint: ternary random features with no performance loss
Hafiz Tiomoko Ali
Zhenyu Liao
Romain Couillet
49
7
0
05 Oct 2021
Learning through atypical "phase transitions" in overparameterized neural networks
Carlo Baldassi
Clarissa Lauditi
Enrico M. Malatesta
R. Pacelli
Gabriele Perugini
R. Zecchina
39
26
0
01 Oct 2021
The edge of chaos: quantum field theory and deep neural networks
Kevin T. Grosvenor
R. Jefferson
43
22
0
27 Sep 2021
Understanding neural networks with reproducing kernel Banach spaces
Francesca Bartolucci
Ernesto De Vito
Lorenzo Rosasco
Stefano Vigogna
52
50
0
20 Sep 2021
Trust Your Robots! Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes
Jongseo Lee
Jianxiang Feng
Matthias Humt
M. Müller
Rudolph Triebel
UQCV
53
21
0
20 Sep 2021
Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
Zhichao Wang
Yizhe Zhu
40
18
0
20 Sep 2021
Uniform Generalization Bounds for Overparameterized Neural Networks
Sattar Vakili
Michael Bromberg
Jezabel R. Garcia
Da-Shan Shiu
A. Bernacchia
35
19
0
13 Sep 2021
Large-Scale Learning with Fourier Features and Tensor Decompositions
Frederiek Wesel
Kim Batselier
25
11
0
03 Sep 2021
A theory of representation learning gives a deep generalisation of kernel methods
Adam X. Yang
Maxime Robeyns
Edward Milsom
Ben Anson
Nandi Schoots
Laurence Aitchison
BDL
37
10
0
30 Aug 2021
Neural Network Gaussian Processes by Increasing Depth
Shao-Qun Zhang
Fei Wang
Feng-lei Fan
11
7
0
29 Aug 2021
Shift-Curvature, SGD, and Generalization
Arwen V. Bradley
C. Gomez-Uribe
Manish Reddy Vuyyuru
35
2
0
21 Aug 2021
Nonperturbative renormalization for the neural network-QFT correspondence
Harold Erbin
Vincent Lahoche
D. O. Samary
46
30
0
03 Aug 2021
Deep Stable neural networks: large-width asymptotics and convergence rates
Stefano Favaro
S. Fortini
Stefano Peluchetti
BDL
35
14
0
02 Aug 2021
Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen
Roman Novak
Lechao Xiao
Jaehoon Lee
DD
51
231
0
27 Jul 2021
Are Bayesian neural networks intrinsically good at out-of-distribution detection?
Christian Henning
Francesco D'Angelo
Benjamin Grewe
UQCV
BDL
31
10
0
26 Jul 2021
A brief note on understanding neural networks as Gaussian processes
Mengwu Guo
BDL
GP
22
2
0
25 Jul 2021
A variational approximate posterior for the deep Wishart process
Sebastian W. Ober
Laurence Aitchison
BDL
27
11
0
21 Jul 2021
The Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations, and Anomalous Diffusion
D. Kunin
Javier Sagastuy-Breña
Lauren Gillespie
Eshed Margalit
Hidenori Tanaka
Surya Ganguli
Daniel L. K. Yamins
36
16
0
19 Jul 2021
Epistemic Neural Networks
Ian Osband
Zheng Wen
M. Asghari
Vikranth Dwaracherla
M. Ibrahimi
Xiyuan Lu
Benjamin Van Roy
UQCV
BDL
32
99
0
19 Jul 2021
Understanding the Distributions of Aggregation Layers in Deep Neural Networks
Eng-Jon Ong
S. Husain
M. Bober
FAtt
FedML
AI4CE
11
2
0
09 Jul 2021
Logit-based Uncertainty Measure in Classification
Huiyue Wu
Diego Klabjan
EDL
BDL
UQCV
20
6
0
06 Jul 2021
Random Neural Networks in the Infinite Width Limit as Gaussian Processes
Boris Hanin
BDL
37
44
0
04 Jul 2021
Scale Mixtures of Neural Network Gaussian Processes
Hyungi Lee
Eunggu Yun
Hongseok Yang
Juho Lee
UQCV
BDL
21
7
0
03 Jul 2021
Subspace Clustering Based Analysis of Neural Networks
Uday Singh Saini
Pravallika Devineni
Evangelos E. Papalexakis
GNN
22
1
0
02 Jul 2021
Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks
Etai Littwin
Omid Saremi
Shuangfei Zhai
Vimal Thilak
Hanlin Goh
J. Susskind
Greg Yang
33
3
0
01 Jul 2021
Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity
Arthur Jacot
François Ged
Berfin Şimşek
Clément Hongler
Franck Gabriel
35
52
0
30 Jun 2021
Repulsive Deep Ensembles are Bayesian
Francesco D'Angelo
Vincent Fortuin
UQCV
BDL
64
95
0
22 Jun 2021
Deep Gaussian Processes: A Survey
Kalvik Jakkala
AI4CE
GP
BDL
29
19
0
21 Jun 2021
Scalable Safety-Critical Policy Evaluation with Accelerated Rare Event Sampling
Mengdi Xu
Peide Huang
Fengpei Li
Jiacheng Zhu
Xuewei Qi
K. Oguchi
Zhiyuan Huang
Henry Lam
Ding Zhao
16
4
0
19 Jun 2021
α-Stable convergence of heavy-tailed infinitely-wide neural networks
Paul Jung
Hoileong Lee
Jiho Lee
Hongseok Yang
21
5
0
18 Jun 2021
Wide stochastic networks: Gaussian limit and PAC-Bayesian training
Eugenio Clerico
George Deligiannidis
Arnaud Doucet
25
12
0
17 Jun 2021
Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation
Haoxiang Wang
Han Zhao
Bo Li
37
88
0
16 Jun 2021
Locality defeats the curse of dimensionality in convolutional teacher-student scenarios
Alessandro Favero
Francesco Cagnetta
Matthieu Wyart
35
31
0
16 Jun 2021
How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
Akhilan Boopathy
Ila Fiete
44
9
0
15 Jun 2021
Scaling Neural Tangent Kernels via Sketching and Random Features
A. Zandieh
Insu Han
H. Avron
N. Shoham
Chaewon Kim
Jinwoo Shin
16
31
0
15 Jun 2021