arXiv:2205.09860 (Cited By)

Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with Linear Convergence Rates
19 May 2022 · Jingwei Zhang, Xunpeng Huang, Jincheng Yu · MLT
Papers citing "Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with Linear Convergence Rates" (11 / 11 papers shown):
1. Convex Analysis of the Mean Field Langevin Dynamics (25 Jan 2022)
   Atsushi Nitanda, Denny Wu, Taiji Suzuki · MLT · 84 / 65 / 0

2. Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks (03 Jul 2020)
   Cong Fang, Jason D. Lee, Pengkun Yang, Tong Zhang · OOD, FedML · 130 / 57 / 0

3. SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence (03 Jun 2020)
   Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet · 85 / 70 / 0

4. Feature Purification: How Adversarial Training Performs Robust Deep Learning (20 May 2020)
   Zeyuan Allen-Zhu, Yuanzhi Li · MLT, AAML · 63 / 150 / 0

5. Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers (12 Nov 2018)
   Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang · MLT · 183 / 769 / 0

6. Gradient Descent Provably Optimizes Over-parameterized Neural Networks (04 Oct 2018)
   S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh · MLT, ODL · 214 / 1,272 / 0

7. Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data (03 Aug 2018)
   Yuanzhi Li, Yingyu Liang · MLT · 216 / 653 / 0

8. Deep Neural Networks as Gaussian Processes (01 Nov 2017)
   Jaehoon Lee, Yasaman Bahri, Roman Novak, S. Schoenholz, Jeffrey Pennington, Jascha Narain Sohl-Dickstein · UQCV, BDL · 115 / 1,093 / 0

9. Stein Variational Gradient Descent as Gradient Flow (25 Apr 2017)
   Qiang Liu · OT · 72 / 275 / 0

10. Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis (13 Feb 2017)
    Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky · 70 / 521 / 0

11. Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm (16 Aug 2016)
    Qiang Liu, Dilin Wang · BDL · 65 / 1,092 / 0