ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Deep Neural Networks as Gaussian Processes
arXiv:1711.00165 · 1 November 2017
Jaehoon Lee, Yasaman Bahri, Roman Novak, S. Schoenholz, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
UQCV · BDL

Papers citing "Deep Neural Networks as Gaussian Processes" (showing 50 of 692)
  • B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and Inverse PDE Problems with Noisy Data
    Liu Yang, Xuhui Meng, George Karniadakis · PINN · 13 Mar 2020
  • FedLoc: Federated Learning Framework for Data-Driven Cooperative Localization and Location Data Processing
    Feng Yin, Zhidi Lin, Yue Xu, Qinglei Kong, Deshi Li, Sergios Theodoridis, Shuguang Cui · FedML · 08 Mar 2020
  • Neural Kernels Without Tangents
    Vaishaal Shankar, Alex Fang, Wenshuo Guo, Sara Fridovich-Keil, Ludwig Schmidt, Jonathan Ragan-Kelley, Benjamin Recht · 04 Mar 2020
  • The large learning rate phase of deep learning: the catapult mechanism
    Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari · ODL · 04 Mar 2020
  • Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
    Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala · 02 Mar 2020
  • Stable behaviour of infinitely wide deep neural networks
    Stefano Favaro, S. Fortini, Stefano Peluchetti · BDL · 01 Mar 2020
  • Convolutional Spectral Kernel Learning
    Jian Li, Yong Liu, Weiping Wang · BDL · 28 Feb 2020
  • Infinitely Wide Graph Convolutional Networks: Semi-supervised Learning via Gaussian Processes
    Jilin Hu, Jianbing Shen, B. Yang, Ling Shao · BDL, GNN · 26 Feb 2020
  • Convex Geometry and Duality of Over-parameterized Neural Networks
    Tolga Ergen, Mert Pilanci · MLT · 25 Feb 2020
  • Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks
    Russell Tsuchida, Tim Pearce, Christopher van der Heide, Fred Roosta, M. Gallagher · 20 Feb 2020
  • Robust Pruning at Initialization
    Soufiane Hayou, Jean-François Ton, Arnaud Doucet, Yee Whye Teh · 19 Feb 2020
  • Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? -- A Neural Tangent Kernel Perspective
    Kaixuan Huang, Yuqing Wang, Molei Tao, T. Zhao · MLT · 14 Feb 2020
  • On Layer Normalization in the Transformer Architecture
    Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, Tie-Yan Liu · AI4CE · 12 Feb 2020
  • Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width
    Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, R. Socher · 10 Feb 2020
  • Quasi-Equivalence of Width and Depth of Neural Networks
    Fenglei Fan, Rongjie Lai, Ge Wang · 06 Feb 2020
  • Function approximation by neural nets in the mean-field regime: Entropic regularization and controlled McKean-Vlasov dynamics
    Belinda Tzen, Maxim Raginsky · 05 Feb 2020
  • Gating creates slow modes and controls phase-space complexity in GRUs and LSTMs
    T. Can, K. Krishnamurthy, D. Schwab · AI4CE · 31 Jan 2020
  • On Random Kernels of Residual Architectures
    Etai Littwin, Tomer Galanti, Lior Wolf · 28 Jan 2020
  • On the infinite width limit of neural networks with a standard parameterization
    Jascha Narain Sohl-Dickstein, Roman Novak, S. Schoenholz, Jaehoon Lee · 21 Jan 2020
  • Disentangling Trainability and Generalization in Deep Neural Networks
    Lechao Xiao, Jeffrey Pennington, S. Schoenholz · 30 Dec 2019
  • Discriminative Clustering with Representation Learning with any Ratio of Labeled to Unlabeled Data
    Corinne Jones, Vincent Roulet, Zaïd Harchaoui · 30 Dec 2019
  • Mean field theory for deep dropout networks: digging up gradient backpropagation deeply
    Wei Huang, R. Xu, Weitao Du, Yutian Zeng, Yunce Zhao · 19 Dec 2019
  • Analytic expressions for the output evolution of a deep neural network
    Anastasia Borovykh · 18 Dec 2019
  • On the Bias-Variance Tradeoff: Textbooks Need an Update
    Brady Neal · 17 Dec 2019
  • On the relationship between multitask neural networks and multitask Gaussian Processes
    Karthikeyan K, S. Bharti, Piyush Rai · BDL · 12 Dec 2019
  • Location Trace Privacy Under Conditional Priors
    Casey Meehan, Kamalika Chaudhuri · 09 Dec 2019
  • Neural Tangents: Fast and Easy Infinite Neural Networks in Python
    Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Narain Sohl-Dickstein, S. Schoenholz · 05 Dec 2019
  • Implicit Priors for Knowledge Sharing in Bayesian Neural Networks
    Jack K. Fitzsimons, Sebastian M. Schmon, Stephen J. Roberts · BDL, FedML · 02 Dec 2019
  • On the Heavy-Tailed Theory of Stochastic Gradient Descent for Deep Neural Networks
    Umut Simsekli, Mert Gurbuzbalaban, T. H. Nguyen, G. Richard, Levent Sagun · 29 Nov 2019
  • Richer priors for infinitely wide multi-layer perceptrons
    Russell Tsuchida, Fred Roosta, M. Gallagher · 29 Nov 2019
  • Convex Formulation of Overparameterized Deep Neural Networks
    Cong Fang, Yihong Gu, Weizhong Zhang, Tong Zhang · 18 Nov 2019
  • Enhanced Convolutional Neural Tangent Kernels
    Zhiyuan Li, Ruosong Wang, Dingli Yu, S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora · 03 Nov 2019
  • Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes
    Greg Yang · 28 Oct 2019
  • Explicitly Bayesian Regularizations in Deep Learning
    Xinjie Lan, Kenneth Barner · UQCV, BDL, AI4CE · 22 Oct 2019
  • Aleatoric and Epistemic Uncertainty in Machine Learning: An Introduction to Concepts and Methods
    Eyke Hüllermeier, Willem Waegeman · PER, UD · 21 Oct 2019
  • Why bigger is not always better: on finite and infinite neural networks
    Laurence Aitchison · 17 Oct 2019
  • Pathological spectra of the Fisher information metric and its variants in deep neural networks
    Ryo Karakida, S. Akaho, S. Amari · 14 Oct 2019
  • Large Deviation Analysis of Function Sensitivity in Random Deep Neural Networks
    Bo Li, D. Saad · 13 Oct 2019
  • On the expected behaviour of noise regularised deep neural networks as Gaussian processes
    Arnu Pretorius, Herman Kamper, Steve Kroon · 12 Oct 2019
  • The Expressivity and Training of Deep Neural Networks: toward the Edge of Chaos?
    Gege Zhang, Gang-cheng Li, Ningwei Shen, Weidong Zhang · 11 Oct 2019
  • Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks
    Sanjeev Arora, S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu · AAML · 03 Oct 2019
  • Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks
    Yu Bai, Jason D. Lee · 03 Oct 2019
  • Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory
    Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein · 01 Oct 2019
  • The asymptotic spectrum of the Hessian of DNN throughout training
    Arthur Jacot, Franck Gabriel, Clément Hongler · 01 Oct 2019
  • Non-Gaussian processes and neural networks at finite widths
    Sho Yaida · 30 Sep 2019
  • Wider Networks Learn Better Features
    D. Gilboa, Guy Gur-Ari · 25 Sep 2019
  • Neural networks are a priori biased towards Boolean functions with low entropy
    Chris Mingard, Joar Skalse, Guillermo Valle Pérez, David Martínez-Rubio, Vladimir Mikulik, A. Louis · FAtt, AI4CE · 25 Sep 2019
  • Asymptotics of Wide Networks from Feynman Diagrams
    Ethan Dyer, Guy Gur-Ari · 25 Sep 2019
  • PAC-Bayesian Bounds for Deep Gaussian Processes
    R. Foll, Ingo Steinwart · BDL · 22 Sep 2019
  • Adversarial Vulnerability Bounds for Gaussian Process Classification
    M. Smith, Kathrin Grosse, Michael Backes, Mauricio A. Alvarez · AAML · 19 Sep 2019