ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Deep Neural Networks as Gaussian Processes
arXiv:1711.00165 (v3, latest) · 1 November 2017
Jaehoon Lee, Yasaman Bahri, Roman Novak, S. Schoenholz, Jeffrey Pennington, Jascha Narain Sohl-Dickstein

Papers citing "Deep Neural Networks as Gaussian Processes"

50 / 696 papers shown
  • Wide stochastic networks: Gaussian limit and PAC-Bayesian training. Eugenio Clerico, George Deligiannidis, Arnaud Doucet. 17 Jun 2021.
  • Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation. Haoxiang Wang, Han Zhao, Yue Liu. 16 Jun 2021.
  • Locality defeats the curse of dimensionality in convolutional teacher-student scenarios. Alessandro Favero, Francesco Cagnetta, Matthieu Wyart. 16 Jun 2021.
  • How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective. Akhilan Boopathy, Ila Fiete. 15 Jun 2021.
  • Scaling Neural Tangent Kernels via Sketching and Random Features. A. Zandieh, Insu Han, H. Avron, N. Shoham, Chaewon Kim, Jinwoo Shin. 15 Jun 2021.
  • Precise characterization of the prior predictive distribution of deep ReLU networks. Lorenzo Noci, Gregor Bachmann, Kevin Roth, Sebastian Nowozin, Thomas Hofmann. 11 Jun 2021.
  • The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective. Geoff Pleiss, John P. Cunningham. 11 Jun 2021.
  • Measuring the robustness of Gaussian processes to kernel choice. William T. Stephenson, S. Ghosh, Tin D. Nguyen, Mikhail Yurochkin, Sameer K. Deshpande, Tamara Broderick. 11 Jun 2021.
  • Learning effective stochastic differential equations from microscopic simulations: linking stochastic numerics to deep learning. Felix Dietrich, Alexei Makeev, George A. Kevrekidis, N. Evangelou, Tom S. Bertalan, Sebastian Reich, Ioannis G. Kevrekidis. 10 Jun 2021.
  • A Neural Tangent Kernel Perspective of GANs. Jean-Yves Franceschi, Emmanuel de Bézenac, Ibrahim Ayed, Mickaël Chen, Sylvain Lamprier, Patrick Gallinari. 10 Jun 2021.
  • Ghosts in Neural Networks: Existence, Structure and Role of Infinite-Dimensional Null Space. Sho Sonoda, Isao Ishikawa, Masahiro Ikeda. 09 Jun 2021.
  • A self consistent theory of Gaussian Processes captures feature learning effects in finite CNNs. Gadi Naveh, Zohar Ringel. 08 Jun 2021.
  • Learning Functional Priors and Posteriors from Data and Physics. Xuhui Meng, Liu Yang, Zhiping Mao, J. Ferrandis, George Karniadakis. 08 Jun 2021.
  • The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization. Mufan Li, Mihai Nica, Daniel M. Roy. 07 Jun 2021.
  • Batch Normalization Orthogonalizes Representations in Deep Random Networks. Hadi Daneshmand, Amir Joudaki, Francis R. Bach. 07 Jun 2021.
  • Reverse Engineering the Neural Tangent Kernel. James B. Simon, Sajant Anand, M. DeWeese. 06 Jun 2021.
  • Out-of-Distribution Generalization in Kernel Regression. Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan. 04 Jun 2021.
  • Symmetry-via-Duality: Invariant Neural Network Densities from Parameter-Space Correlators. Anindita Maiti, Keegan Stoner, James Halverson. 01 Jun 2021.
  • Asymptotics of representation learning in finite Bayesian neural networks. Jacob A. Zavatone-Veth, Abdulkadir Canatar, Benjamin S. Ruben, Cengiz Pehlevan. 01 Jun 2021.
  • Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime. Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová. 31 May 2021.
  • Sparse Uncertainty Representation in Deep Learning with Inducing Weights. H. Ritter, Martin Kukla, Chen Zhang, Yingzhen Li. 30 May 2021.
  • Active Learning in Bayesian Neural Networks with Balanced Entropy Learning Principle. J. Woo. 30 May 2021.
  • Activation function design for deep networks: linearity and effective initialisation. Michael Murray, V. Abrol, Jared Tanner. 17 May 2021.
  • Priors in Bayesian Deep Learning: A Review. Vincent Fortuin. 14 May 2021.
  • Sparsity-Probe: Analysis tool for Deep Learning Models. Ido Ben-Shaul, S. Dekel. 14 May 2021.
  • Deep Neural Networks as Point Estimates for Deep Gaussian Processes. Vincent Dutordoir, J. Hensman, Mark van der Wilk, Carl Henrik Ek, Zoubin Ghahramani, N. Durrande. 10 May 2021.
  • Tensor Programs IIb: Architectural Universality of Neural Tangent Kernel Training Dynamics. Greg Yang, Etai Littwin. 08 May 2021.
  • Adaptive Latent Space Tuning for Non-Stationary Distributions. A. Scheinker, F. Cropp, S. Paiagua, D. Filippetto. 08 May 2021.
  • Uniform Convergence, Adversarial Spheres and a Simple Remedy. Gregor Bachmann, Seyed-Mohsen Moosavi-Dezfooli, Thomas Hofmann. 07 May 2021.
  • Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings. U. Sarawgi, Rishab Khincha, W. Zulfikar, Satrajit S. Ghosh, Pattie Maes. 21 Apr 2021.
  • How rotational invariance of common kernels prevents generalization in high dimensions. Konstantin Donhauser, Mingqi Wu, Fanny Yang. 09 Apr 2021.
  • Artificial Neural Network Modeling for Airline Disruption Management. Kolawole E. Ogunsina, Wendy A. Okolo. 05 Apr 2021.
  • Learning with Neural Tangent Kernels in Near Input Sparsity Time. A. Zandieh. 01 Apr 2021.
  • Initializing ReLU networks in an expressive subspace of weights. Dayal Singh, G. J. Sreejith. 23 Mar 2021.
  • Weighted Neural Tangent Kernel: A Generalized and Improved Network-Induced Kernel. Lei Tan, Shutong Wu, Xiaolin Huang. 22 Mar 2021.
  • Data-driven Aerodynamic Analysis of Structures using Gaussian Processes. I. Kavrakov, A. McRobie, Guido Morgenthal. 20 Mar 2021.
  • Why flatness does and does not correlate with generalization for deep neural networks. Shuo Zhang, Isaac Reid, Guillermo Valle Pérez, A. Louis. 10 Mar 2021.
  • Towards Deepening Graph Neural Networks: A GNTK-based Optimization Perspective. Wei Huang, Yayong Li, Weitao Du, Jie Yin, R. Xu, Ling-Hao Chen, Miao Zhang. 03 Mar 2021.
  • Computing the Information Content of Trained Neural Networks. Jeremy Bernstein, Yisong Yue. 01 Mar 2021.
  • Multi-fidelity regression using artificial neural networks: efficient approximation of parameter-dependent output quantities. Mengwu Guo, Andrea Manzoni, Maurice Amendt, Paolo Conti, J. Hesthaven. 26 Feb 2021.
  • Neural Generalization of Multiple Kernel Learning. Ahamad Navid Ghanizadeh, Kamaledin Ghiasi-Shirazi, R. Monsefi, Mohammadreza Qaraei. 26 Feb 2021.
  • Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed. Maria Refinetti, Sebastian Goldt, Florent Krzakala, Lenka Zdeborová. 23 Feb 2021.
  • Large-width functional asymptotics for deep Gaussian neural networks. Daniele Bracale, Stefano Favaro, S. Fortini, Stefano Peluchetti. 20 Feb 2021.
  • Approximation and Learning with Deep Convolutional Models: a Kernel Perspective. A. Bietti. 19 Feb 2021.
  • WGAN with an Infinitely Wide Generator Has No Spurious Stationary Points. Albert No, Taeho Yoon, Sehyun Kwon, Ernest K. Ryu. 15 Feb 2021.
  • Cross-modal Adversarial Reprogramming. Paarth Neekhara, Shehzeen Samarah Hussain, Jinglong Du, Shlomo Dubnov, F. Koushanfar, Julian McAuley. 15 Feb 2021.
  • Double-descent curves in neural networks: a new perspective using Gaussian processes. Ouns El Harzli, Bernardo Cuenca Grau, Guillermo Valle Pérez, A. Louis. 14 Feb 2021.
  • Explaining Neural Scaling Laws. Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, Utkarsh Sharma. 12 Feb 2021.
  • Bayesian Uncertainty Estimation of Learned Variational MRI Reconstruction. Dominik Narnhofer, Alexander Effland, Erich Kobler, Kerstin Hammernik, Florian Knoll, Thomas Pock. 12 Feb 2021.
  • Bayesian Neural Network Priors Revisited. Vincent Fortuin, Adrià Garriga-Alonso, Sebastian W. Ober, F. Wenzel, Gunnar Rätsch, Richard Turner, Mark van der Wilk, Laurence Aitchison. 12 Feb 2021.