A Mean Field View of the Landscape of Two-Layers Neural Networks
arXiv:1804.06561 · 18 April 2018
Song Mei, Andrea Montanari, Phan-Minh Nguyen
Topic: MLT
Papers citing "A Mean Field View of the Landscape of Two-Layers Neural Networks" (50 of 206 shown):
| Title | Authors | Topics | Citations | Date |
|---|---|---|---|---|
| Representation formulas and pointwise properties for Barron functions | E. Weinan, Stephan Wojtowytsch | | 79 | 10 Jun 2020 |
| Machine Learning and Control Theory | A. Bensoussan, Yiqun Li, Dinh Phan Cao Nguyen, M. Tran, S. Yam, Xiang Zhou | AI4CE | 12 | 10 Jun 2020 |
| A Survey on Generative Adversarial Networks: Variants, Applications, and Training | Abdul Jabbar, Xi Li, Bourahla Omar | | 266 | 09 Jun 2020 |
| Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory | Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang | OOD, MLT | 11 | 08 Jun 2020 |
| Can Shallow Neural Networks Beat the Curse of Dimensionality? A mean field training perspective | Stephan Wojtowytsch, E. Weinan | MLT | 48 | 21 May 2020 |
| Predicting the outputs of finite deep neural networks trained with noisy gradients | Gadi Naveh, Oded Ben-David, H. Sompolinsky, Z. Ringel | | 20 | 02 Apr 2020 |
| Symmetry & critical points for a model shallow neural network | Yossi Arjevani, M. Field | | 13 | 23 Mar 2020 |
| A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth | Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying | MLT | 78 | 11 Mar 2020 |
| The large learning rate phase of deep learning: the catapult mechanism | Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari | ODL | 234 | 04 Mar 2020 |
| Loss landscapes and optimization in over-parameterized non-linear systems and neural networks | Chaoyue Liu, Libin Zhu, M. Belkin | ODL | 247 | 29 Feb 2020 |
| A Spectral Analysis of Dot-product Kernels | M. Scetbon, Zaïd Harchaoui | | 2 | 28 Feb 2020 |
| Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss | Lénaïc Chizat, Francis R. Bach | MLT | 327 | 11 Feb 2020 |
| Robustness of Bayesian Neural Networks to Gradient-Based Attacks | Ginevra Carbone, Matthew Wicker, Luca Laurenti, A. Patané, Luca Bortolussi, G. Sanguinetti | AAML | 77 | 11 Feb 2020 |
| Inference in Multi-Layer Networks with Matrix-Valued Unknowns | Parthe Pandit, Mojtaba Sahraee-Ardakan, S. Rangan, P. Schniter, A. Fletcher | | 6 | 26 Jan 2020 |
| On the infinite width limit of neural networks with a standard parameterization | Jascha Narain Sohl-Dickstein, Roman Novak, S. Schoenholz, Jaehoon Lee | | 47 | 21 Jan 2020 |
| Mean-Field and Kinetic Descriptions of Neural Differential Equations | Michael Herty, T. Trimborn, G. Visconti | | 6 | 07 Jan 2020 |
| Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity | Shiyu Liang, Ruoyu Sun, R. Srikant | | 19 | 31 Dec 2019 |
| Machine Learning from a Continuous Viewpoint | E. Weinan, Chao Ma, Lei Wu | | 102 | 30 Dec 2019 |
| Optimization for deep learning: theory and algorithms | Ruoyu Sun | ODL | 168 | 19 Dec 2019 |
| State Space Emulation and Annealed Sequential Monte Carlo for High Dimensional Optimization | Chencheng Cai, Rong Chen | | 0 | 17 Nov 2019 |
| Global Convergence of Gradient Descent for Deep Linear Residual Networks | Lei Wu, Qingcan Wang, Chao Ma | ODL, AI4CE | 22 | 02 Nov 2019 |
| Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval | Yan Shuo Tan, Roman Vershynin | | 35 | 28 Oct 2019 |
| The Local Elasticity of Neural Networks | Hangfeng He, Weijie J. Su | | 44 | 15 Oct 2019 |
| Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks | Yu Bai, J. Lee | | 116 | 03 Oct 2019 |
| Finite Depth and Width Corrections to the Neural Tangent Kernel | Boris Hanin, Mihai Nica | MDE | 150 | 13 Sep 2019 |
| The generalization error of random features regression: Precise asymptotics and double descent curve | Song Mei, Andrea Montanari | | 626 | 14 Aug 2019 |
| Sparse Optimization on Measures with Over-parameterized Gradient Descent | Lénaïc Chizat | | 92 | 24 Jul 2019 |
| Theory of the Frequency Principle for General Deep Neural Networks | Tao Luo, Zheng Ma, Zhi-Qin John Xu, Yaoyu Zhang | | 78 | 21 Jun 2019 |
| Kernel and Rich Regimes in Overparametrized Models | Blake E. Woodworth, Suriya Gunasekar, Pedro H. P. Savarese, E. Moroshko, Itay Golan, J. Lee, Daniel Soudry, Nathan Srebro | | 352 | 13 Jun 2019 |
| Maximum Mean Discrepancy Gradient Flow | Michael Arbel, Anna Korba, Adil Salim, A. Gretton | | 159 | 11 Jun 2019 |
| Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems | Atsushi Nitanda, Geoffrey Chinot, Taiji Suzuki | MLT | 33 | 23 May 2019 |
| An Information Theoretic Interpretation to Deep Neural Networks | Shao-Lun Huang, Xiangxiang Xu, Lizhong Zheng, G. Wornell | FAtt | 41 | 16 May 2019 |
| Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation | Colin Wei, Tengyu Ma | | 109 | 09 May 2019 |
| Linearized two-layers neural networks in high dimension | Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari | MLT | 241 | 27 Apr 2019 |
| A Selective Overview of Deep Learning | Jianqing Fan, Cong Ma, Yiqiao Zhong | BDL, VLM | 136 | 10 Apr 2019 |
| Analysis of the Gradient Descent Algorithm for a Deep Neural Network Model with Skip-connections | E. Weinan, Chao Ma, Qingcan Wang, Lei Wu | MLT | 22 | 10 Apr 2019 |
| Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks | Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak | NoLa | 351 | 27 Mar 2019 |
| Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks | Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang | MLT | 962 | 24 Jan 2019 |
| Scaling description of generalization with number of parameters in deep learning | Mario Geiger, Arthur Jacot, S. Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, M. Wyart | | 195 | 06 Jan 2019 |
| Analysis of a Two-Layer Neural Network via Displacement Convexity | Adel Javanmard, Marco Mondelli, Andrea Montanari | MLT | 57 | 05 Jan 2019 |
| Gradient Descent Finds Global Minima of Deep Neural Networks | S. Du, J. Lee, Haochuan Li, Liwei Wang, M. Tomizuka | ODL | 1,122 | 09 Nov 2018 |
| On the Convergence Rate of Training Recurrent Neural Networks | Zeyuan Allen-Zhu, Yuanzhi Li, Zhao-quan Song | | 191 | 29 Oct 2018 |
| Subgradient Descent Learns Orthogonal Dictionaries | Yu Bai, Qijia Jiang, Ju Sun | | 51 | 25 Oct 2018 |
| A Priori Estimates of the Population Risk for Two-layer Neural Networks | Weinan E, Chao Ma, Lei Wu | | 130 | 15 Oct 2018 |
| Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel | Colin Wei, J. Lee, Qiang Liu, Tengyu Ma | | 243 | 12 Oct 2018 |
| Unbiased deep solvers for linear parametric PDEs | Marc Sabate Vidales, David Siska, Lukasz Szpruch | OOD | 7 | 11 Oct 2018 |
| Gradient Descent Provably Optimizes Over-parameterized Neural Networks | S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh | MLT, ODL | 1,250 | 04 Oct 2018 |
| Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning | Charles H. Martin, Michael W. Mahoney | AI4CE | 190 | 02 Oct 2018 |
| Mean Field Analysis of Neural Networks: A Central Limit Theorem | Justin A. Sirignano, K. Spiliopoulos | MLT | 192 | 28 Aug 2018 |
| On Lipschitz Bounds of General Convolutional Neural Networks | Dongmian Zou, R. Balan, Maneesh Kumar Singh | | 54 | 04 Aug 2018 |