ResearchTrend.AI

Deep Neural Networks as Gaussian Processes (arXiv:1711.00165)

1 November 2017
Jaehoon Lee, Yasaman Bahri, Roman Novak, S. Schoenholz, Jeffrey Pennington, Jascha Narain Sohl-Dickstein

Papers citing "Deep Neural Networks as Gaussian Processes"

Showing 50 of 696 citing papers:
Adversarial Robustness Guarantees for Random Deep Neural Networks
Giacomo De Palma, B. Kiani, S. Lloyd. 13 Apr 2020.

On the Neural Tangent Kernel of Deep Networks with Orthogonal Initialization
Wei Huang, Weitao Du, R. Xu. 13 Apr 2020.

Reinforcement Learning via Gaussian Processes with Neural Network Dual Kernels
I. Goumiri, Benjamin W. Priest, M. Schneider. 10 Apr 2020.

Predicting the outputs of finite deep neural networks trained with noisy gradients
Gadi Naveh, Oded Ben-David, H. Sompolinsky, Zohar Ringel. 02 Apr 2020.

On Infinite-Width Hypernetworks
Etai Littwin, Tomer Galanti, Lior Wolf, Greg Yang. 27 Mar 2020.

B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and Inverse PDE Problems with Noisy Data
Liu Yang, Xuhui Meng, George Karniadakis. 13 Mar 2020.

FedLoc: Federated Learning Framework for Data-Driven Cooperative Localization and Location Data Processing
Feng Yin, Zhidi Lin, Yue Xu, Qinglei Kong, Deshi Li, Sergios Theodoridis, Shuguang Cui. 08 Mar 2020.

Neural Kernels Without Tangents
Vaishaal Shankar, Alex Fang, Wenshuo Guo, Sara Fridovich-Keil, Ludwig Schmidt, Jonathan Ragan-Kelley, Benjamin Recht. 04 Mar 2020.

The large learning rate phase of deep learning: the catapult mechanism
Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari. 04 Mar 2020.

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala. 02 Mar 2020.

Stable behaviour of infinitely wide deep neural networks
Stefano Favaro, S. Fortini, Stefano Peluchetti. 01 Mar 2020.

Convolutional Spectral Kernel Learning
Jian Li, Yong Liu, Weiping Wang. 28 Feb 2020.

Infinitely Wide Graph Convolutional Networks: Semi-supervised Learning via Gaussian Processes
Jilin Hu, Jianbing Shen, B. Yang, Ling Shao. 26 Feb 2020.

Convex Geometry and Duality of Over-parameterized Neural Networks
Tolga Ergen, Mert Pilanci. 25 Feb 2020.

Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks
Russell Tsuchida, Tim Pearce, Christopher van der Heide, Fred Roosta, M. Gallagher. 20 Feb 2020.

Robust Pruning at Initialization
Soufiane Hayou, Jean-François Ton, Arnaud Doucet, Yee Whye Teh. 19 Feb 2020.

Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? -- A Neural Tangent Kernel Perspective
Kaixuan Huang, Yuqing Wang, Molei Tao, T. Zhao. 14 Feb 2020.

On Layer Normalization in the Transformer Architecture
Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, Tie-Yan Liu. 12 Feb 2020.

Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width
Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, R. Socher. 10 Feb 2020.

Quasi-Equivalence of Width and Depth of Neural Networks
Fenglei Fan, Rongjie Lai, Ge Wang. 06 Feb 2020.

Function approximation by neural nets in the mean-field regime: Entropic regularization and controlled McKean-Vlasov dynamics
Belinda Tzen, Maxim Raginsky. 05 Feb 2020.

Gating creates slow modes and controls phase-space complexity in GRUs and LSTMs
T. Can, K. Krishnamurthy, D. Schwab. 31 Jan 2020.

On Random Kernels of Residual Architectures
Etai Littwin, Tomer Galanti, Lior Wolf. 28 Jan 2020.

On the infinite width limit of neural networks with a standard parameterization
Jascha Narain Sohl-Dickstein, Roman Novak, S. Schoenholz, Jaehoon Lee. 21 Jan 2020.

Disentangling Trainability and Generalization in Deep Neural Networks
Lechao Xiao, Jeffrey Pennington, S. Schoenholz. 30 Dec 2019.

Discriminative Clustering with Representation Learning with any Ratio of Labeled to Unlabeled Data
Corinne Jones, Vincent Roulet, Zaïd Harchaoui. 30 Dec 2019.

Mean field theory for deep dropout networks: digging up gradient backpropagation deeply
Wei Huang, R. Xu, Weitao Du, Yutian Zeng, Yunce Zhao. 19 Dec 2019.

Analytic expressions for the output evolution of a deep neural network
Anastasia Borovykh. 18 Dec 2019.

On the Bias-Variance Tradeoff: Textbooks Need an Update
Brady Neal. 17 Dec 2019.

On the relationship between multitask neural networks and multitask Gaussian Processes
Karthikeyan K, S. Bharti, Piyush Rai. 12 Dec 2019.

Location Trace Privacy Under Conditional Priors
Casey Meehan, Kamalika Chaudhuri. 09 Dec 2019.

Neural Tangents: Fast and Easy Infinite Neural Networks in Python
Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Narain Sohl-Dickstein, S. Schoenholz. 05 Dec 2019.

Implicit Priors for Knowledge Sharing in Bayesian Neural Networks
Jack K. Fitzsimons, Sebastian M. Schmon, Stephen J. Roberts. 02 Dec 2019.

On the Heavy-Tailed Theory of Stochastic Gradient Descent for Deep Neural Networks
Umut Simsekli, Mert Gurbuzbalaban, T. H. Nguyen, G. Richard, Levent Sagun. 29 Nov 2019.

Richer priors for infinitely wide multi-layer perceptrons
Russell Tsuchida, Fred Roosta, M. Gallagher. 29 Nov 2019.

Convex Formulation of Overparameterized Deep Neural Networks
Cong Fang, Yihong Gu, Weizhong Zhang, Tong Zhang. 18 Nov 2019.

Enhanced Convolutional Neural Tangent Kernels
Zhiyuan Li, Ruosong Wang, Dingli Yu, S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora. 03 Nov 2019.

Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes
Greg Yang. 28 Oct 2019.

Explicitly Bayesian Regularizations in Deep Learning
Xinjie Lan, Kenneth Barner. 22 Oct 2019.

Aleatoric and Epistemic Uncertainty in Machine Learning: An Introduction to Concepts and Methods
Eyke Hüllermeier, Willem Waegeman. 21 Oct 2019.

Why bigger is not always better: on finite and infinite neural networks
Laurence Aitchison. 17 Oct 2019.

Pathological spectra of the Fisher information metric and its variants in deep neural networks
Ryo Karakida, S. Akaho, S. Amari. 14 Oct 2019.

Large Deviation Analysis of Function Sensitivity in Random Deep Neural Networks
Bo Li, D. Saad. 13 Oct 2019.

On the expected behaviour of noise regularised deep neural networks as Gaussian processes
Arnu Pretorius, Herman Kamper, Steve Kroon. 12 Oct 2019.

The Expressivity and Training of Deep Neural Networks: toward the Edge of Chaos?
Gege Zhang, Gang-cheng Li, Ningwei Shen, Weidong Zhang. 11 Oct 2019.

Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks
Sanjeev Arora, S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu. 03 Oct 2019.

Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks
Yu Bai, Jason D. Lee. 03 Oct 2019.

Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory
Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein. 01 Oct 2019.

The asymptotic spectrum of the Hessian of DNN throughout training
Arthur Jacot, Franck Gabriel, Clément Hongler. 01 Oct 2019.

Non-Gaussian processes and neural networks at finite widths
Sho Yaida. 30 Sep 2019.