Deep Neural Networks as Gaussian Processes (arXiv:1711.00165, v3)
1 November 2017
Jaehoon Lee, Yasaman Bahri, Roman Novak, S. Schoenholz, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
[UQCV, BDL]

Papers citing "Deep Neural Networks as Gaussian Processes" (showing 50 of 696)

• Investigating Generalization by Controlling Normalized Margin. Alexander R. Farhang, Jeremy Bernstein, Kushal Tirumala, Yang Liu, Yisong Yue. 08 May 2022.
• Variational Inference for Nonlinear Inverse Problems via Neural Net Kernels: Comparison to Bayesian Neural Networks, Application to Topology Optimization. Vahid Keshavarzzadeh, Robert M. Kirby, A. Narayan. [BDL] 07 May 2022.
• NeuralEF: Deconstructing Kernels by Deep Neural Networks. Zhijie Deng, Jiaxin Shi, Jun Zhu. 30 Apr 2022.
• Deep Ensemble as a Gaussian Process Approximate Posterior. Zhijie Deng, Feng Zhou, Jianfei Chen, Guoqiang Wu, Jun Zhu. [UQCV] 30 Apr 2022.
• Convergence of neural networks to Gaussian mixture distribution. Yasuhiko Asao, Ryotaro Sakamoto, S. Takagi. [BDL] 26 Apr 2022.
• On Feature Learning in Neural Networks with Global Convergence Guarantees. Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna. [MLT] 22 Apr 2022.
• Towards a Unified Framework for Uncertainty-aware Nonlinear Variable Selection with Theoretical Guarantees. Wenying Deng, Beau Coker, Rajarshi Mukherjee, J. Liu, B. Coull. 15 Apr 2022.
• Single-level Adversarial Data Synthesis based on Neural Tangent Kernels. Yu-Rong Zhang, Ruei-Yang Su, Sheng-Yen Chou, Shan Wu. [GAN] 08 Apr 2022.
• Analytic theory for the dynamics of wide quantum neural networks. Junyu Liu, K. Najafi, Kunal Sharma, F. Tacchino, Liang Jiang, Antonio Mezzacapo. 30 Mar 2022.
• On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes. Elvis Dohmatob, A. Bietti. [AAML] 22 Mar 2022.
• Origami in N dimensions: How feed-forward networks manufacture linear separability. Christian Keup, M. Helias. 21 Mar 2022.
• A Framework and Benchmark for Deep Batch Active Learning for Regression. David Holzmüller, Viktor Zaverkin, Johannes Kastner, Ingo Steinwart. [UQCV, BDL, GP] 17 Mar 2022.
• Scalable marginalization of correlated latent variables with applications to learning particle interaction kernels. Mengyang Gu, Xubo Liu, X. Fang, Sui Tang. 16 Mar 2022.
• On Connecting Deep Trigonometric Networks with Deep Gaussian Processes: Covariance, Expressivity, and Neural Tangent Kernel. Chi-Ken Lu, Patrick Shafto. [BDL] 14 Mar 2022.
• Quantitative Gaussian Approximation of Randomly Initialized Deep Neural Networks. Andrea Basteri, Dario Trevisan. [BDL] 14 Mar 2022.
• Deep Regression Ensembles. Antoine Didisheim, Bryan Kelly, Semyon Malamud. [UQCV] 10 Mar 2022.
• Revealing the Excitation Causality between Climate and Political Violence via a Neural Forward-Intensity Poisson Process. S. Sun, Bailu Jin, Zhuangkun Wei, Weisi Guo. 09 Mar 2022.
• Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer. Greg Yang, J. E. Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, J. Pachocki, Weizhu Chen, Jianfeng Gao. 07 Mar 2022.
• Generalization Through The Lens Of Leave-One-Out Error. Gregor Bachmann, Thomas Hofmann, Aurelien Lucchi. 07 Mar 2022.
• An Analysis of Ensemble Sampling. Chao Qin, Zheng Wen, Xiuyuan Lu, Benjamin Van Roy. 02 Mar 2022.
• Contrasting random and learned features in deep Bayesian linear regression. Jacob A. Zavatone-Veth, William L. Tong, Cengiz Pehlevan. [BDL, MLT] 01 Mar 2022.
• Explicitising The Implicit Intrepretability of Deep Neural Networks Via Duality. Chandrashekar Lakshminarayanan, Ashutosh Kumar Singh, A. Rajkumar. [AI4CE] 01 Mar 2022.
• Embedded Ensembles: Infinite Width Limit and Operating Regimes. Maksim Velikanov, Roma Kail, Ivan Anokhin, Roman Vashurin, Maxim Panov, Alexey Zaytsev, Dmitry Yarotsky. 24 Feb 2022.
• A duality connecting neural network and cosmological dynamics. Sven Krippendorf, M. Spannowsky. [AI4CE] 22 Feb 2022.
• UncertaINR: Uncertainty Quantification of End-to-End Implicit Neural Representations for Computed Tomography. Francisca Vasconcelos, Bobby He, Nalini Singh, Yee Whye Teh. [BDL, OOD, UQCV] 22 Feb 2022.
• Learning from Randomly Initialized Neural Network Features. Ehsan Amid, Rohan Anil, W. Kotłowski, Manfred K. Warmuth. [MLT] 13 Feb 2022.
• Decomposing neural networks as mappings of correlation functions. Kirsten Fischer, Alexandre René, Christian Keup, Moritz Layer, David Dahmen, M. Helias. [FAtt] 10 Feb 2022.
• Multi-model Ensemble Analysis with Neural Network Gaussian Processes. Trevor Harris, Yangqiu Song, Ryan Sriver. 08 Feb 2022.
• Lossy Gradient Compression: How Much Accuracy Can One Bit Buy? Sadaf Salehkalaibar, Stefano Rini. [FedML] 06 Feb 2022.
• Learning Representation from Neural Fisher Kernel with Low-rank Approximation. Ruixiang Zhang, Shuangfei Zhai, Etai Littwin, J. Susskind. [SSL] 04 Feb 2022.
• Deep Layer-wise Networks Have Closed-Form Weights. Chieh-Tsai Wu, A. Masoomi, Arthur Gretton, Jennifer Dy. 01 Feb 2022.
• Stochastic Neural Networks with Infinite Width are Deterministic. Liu Ziyin, Hanlin Zhang, Xiangming Meng, Yuting Lu, Eric P. Xing, Masakuni Ueda. 30 Jan 2022.
• How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis. Shuai Zhang, Ming Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong. [SSL, MLT] 21 Jan 2022.
• Kernel Methods and Multi-layer Perceptrons Learn Linear Models in High Dimensions. Mojtaba Sahraee-Ardakan, M. Emami, Parthe Pandit, S. Rangan, A. Fletcher. 20 Jan 2022.
• Adaptive Transfer Learning for Plant Phenotyping. Jun Wu, Elizabeth Ainsworth, Sheng Wang, K. Guan, Jingrui He. 14 Jan 2022.
• On neural network kernels and the storage capacity problem. Jacob A. Zavatone-Veth, Cengiz Pehlevan. 12 Jan 2022.
• Complexity from Adaptive-Symmetries Breaking: Global Minima in the Statistical Mechanics of Deep Neural Networks. Shaun Li. [AI4CE] 03 Jan 2022.
• Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs. Inbar Seroussi, Gadi Naveh, Zohar Ringel. 31 Dec 2021.
• NN2Poly: A polynomial representation for deep feed-forward artificial neural networks. Pablo Morala, Jenny Alexandra Cifuentes, R. Lillo, Iñaki Ucar. 21 Dec 2021.
• Revisiting Memory Efficient Kernel Approximation: An Indefinite Learning Perspective. Simon Heilig, Maximilian Münch, Frank-Michael Schleif. 18 Dec 2021.
• Posterior contraction rates for constrained deep Gaussian processes in density estimation and classification. François Bachoc, A. Lagnoux. 14 Dec 2021.
• Eigenspace Restructuring: a Principle of Space and Frequency in Neural Networks. Lechao Xiao. 10 Dec 2021.
• Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity. Junchi Yang, Antonio Orvieto, Aurelien Lucchi, Niao He. 10 Dec 2021.
• Provable Continual Learning via Sketched Jacobian Approximations. Reinhard Heckel. [CLL] 09 Dec 2021.
• Infinite Neural Network Quantum States: Entanglement and Training Dynamics. Di Luo, James Halverson. 01 Dec 2021.
• Dependence between Bayesian neural network units. M. Vladimirova, Julyan Arbel, Stéphane Girard. [BDL] 29 Nov 2021.
• Impact of classification difficulty on the weight matrices spectra in Deep Learning and application to early-stopping. Xuran Meng, Jianfeng Yao. 26 Nov 2021.
• Critical Initialization of Wide and Deep Neural Networks through Partial Jacobians: General Theory and Applications. Darshil Doshi, Tianyu He, Andrey Gromov. 23 Nov 2021.
• Depth induces scale-averaging in overparameterized linear Bayesian neural networks. Jacob A. Zavatone-Veth, Cengiz Pehlevan. [BDL, UQCV, MDE] 23 Nov 2021.
• On the Equivalence between Neural Network and Support Vector Machine. Yilan Chen, Wei Huang, Lam M. Nguyen, Tsui-Wei Weng. [AAML] 11 Nov 2021.