On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
Lénaïc Chizat, Francis R. Bach
arXiv:1805.09545, 24 May 2018
Papers citing "On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport" (50 of 483 shown):
Analyzing Upper Bounds on Mean Absolute Errors for Deep Neural Network Based Vector-to-Vector Regression. Jun Qi, Jun Du, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee. 04 Aug 2020.
Low-loss connection of weight vectors: distribution-based approaches. Ivan Anokhin, Dmitry Yarotsky. 03 Aug 2020.
Ergodicity of the underdamped mean-field Langevin dynamics. A. Kazeykina, Zhenjie Ren, Xiaolu Tan, Junjian Yang. 29 Jul 2020.
Understanding Implicit Regularization in Over-Parameterized Single Index Model. Jianqing Fan, Zhuoran Yang, Mengxin Yu. 16 Jul 2020.
Phase diagram for two-layer ReLU neural networks at infinite-width limit. Tao Luo, Zhi-Qin John Xu, Zheng Ma, Yaoyu Zhang. 15 Jul 2020.
Supervised learning from noisy observations: Combining machine-learning techniques with data assimilation. Georg Gottwald, Sebastian Reich. 14 Jul 2020.
Global Convergence of Second-order Dynamics in Two-layer Neural Networks. Walid Krichene, Kenneth F. Caluya, A. Halder. 14 Jul 2020.
Quantitative Propagation of Chaos for SGD in Wide Neural Networks. Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Simsekli. 13 Jul 2020.
Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK. Yuanzhi Li, Tengyu Ma, Hongyang R. Zhang. 09 Jul 2020.
Towards an Understanding of Residual Networks Using Neural Tangent Hierarchy (NTH). Yuqing Li, Tao Luo, N. Yip. 07 Jul 2020.
Ridge Regression with Over-Parametrized Two-Layer Networks Converge to Ridgelet Spectrum. Sho Sonoda, Isao Ishikawa, Masahiro Ikeda. 07 Jul 2020.
Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks. Cong Fang, J. Lee, Pengkun Yang, Tong Zhang. 03 Jul 2020.
The Gaussian equivalence of generative models for learning with shallow neural networks. Sebastian Goldt, Bruno Loureiro, Galen Reeves, Florent Krzakala, M. Mézard, Lenka Zdeborová. 25 Jun 2020.
The Quenching-Activation Behavior of the Gradient Descent Dynamics for Two-layer Neural Network Models. Chao Ma, Lei Wu, E. Weinan. 25 Jun 2020.
On the Empirical Neural Tangent Kernel of Standard Finite-Width Convolutional Neural Network Architectures. M. Samarin, Volker Roth, David Belius. 24 Jun 2020.
When Do Neural Networks Outperform Kernel Methods? Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari. 24 Jun 2020.
Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime. Atsushi Nitanda, Taiji Suzuki. 22 Jun 2020.
On Sparsity in Overparametrised Shallow ReLU Networks. Jaume de Dios, Joan Bruna. 18 Jun 2020.
A Note on the Global Convergence of Multilayer Neural Networks in the Mean Field Regime. H. Pham, Phan-Minh Nguyen. 16 Jun 2020.
Hessian-Free High-Resolution Nesterov Acceleration for Sampling. Ruilin Li, H. Zha, Molei Tao. 16 Jun 2020.
Non-convergence of stochastic gradient descent in the training of deep neural networks. Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek. 12 Jun 2020.
Directional convergence and alignment in deep learning. Ziwei Ji, Matus Telgarsky. 11 Jun 2020.
Dynamically Stable Infinite-Width Limits of Neural Classifiers. Eugene Golikov. 11 Jun 2020.
Dynamical mean-field theory for stochastic gradient descent in Gaussian mixture classification. Francesca Mignacco, Florent Krzakala, Pierfrancesco Urbani, Lenka Zdeborová. 10 Jun 2020.
Representation formulas and pointwise properties for Barron functions. E. Weinan, Stephan Wojtowytsch. 10 Jun 2020.
The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions. Yifei Wang, Jonathan Lacotte, Mert Pilanci. 10 Jun 2020.
Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory. Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang. 08 Jun 2020.
Structure preserving deep learning. E. Celledoni, Matthias Joachim Ehrhardt, Christian Etmann, R. McLachlan, B. Owren, Carola-Bibiane Schönlieb, Ferdia Sherry. 05 Jun 2020.
Network size and weights size for memorization with two-layers neural networks. Sébastien Bubeck, Ronen Eldan, Y. Lee, Dan Mikulincer. 04 Jun 2020.
A mathematical model for automatic differentiation in machine learning. Jérôme Bolte, Edouard Pauwels. 03 Jun 2020.
On the Convergence of Gradient Descent Training for Two-layer ReLU-networks in the Mean Field Regime. Stephan Wojtowytsch. 27 May 2020.
Can Shallow Neural Networks Beat the Curse of Dimensionality? A mean field training perspective. Stephan Wojtowytsch, E. Weinan. 21 May 2020.
Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm. Sayar Karmakar, Anirbit Mukherjee. 08 May 2020.
Optimization in Machine Learning: A Distribution Space Approach. Yongqiang Cai, Qianxiao Li, Zuowei Shen. 18 Apr 2020.
Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks. Tengyuan Liang, Hai Tran-Bach. 09 Apr 2020.
Mirror Descent Algorithms for Minimizing Interacting Free Energy. Lexing Ying. 08 Apr 2020.
Piecewise linear activations substantially shape the loss surfaces of neural networks. Fengxiang He, Bohan Wang, Dacheng Tao. 27 Mar 2020.
Symmetry & critical points for a model shallow neural network. Yossi Arjevani, M. Field. 23 Mar 2020.
Towards a General Theory of Infinite-Width Limits of Neural Classifiers. Eugene Golikov. 12 Mar 2020.
A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth. Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying. 11 Mar 2020.
A mean-field analysis of two-player zero-sum games. Carles Domingo-Enrich, Samy Jelassi, A. Mensch, Grant M. Rotskoff, Joan Bruna. 14 Feb 2020.
Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss. Lénaïc Chizat, Francis R. Bach. 11 Feb 2020.
A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks. Zixiang Chen, Yuan Cao, Quanquan Gu, Tong Zhang. 10 Feb 2020.
Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width. Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, R. Socher. 10 Feb 2020.
Global Convergence of Frank Wolfe on One Hidden Layer Networks. Alexandre d’Aspremont, Mert Pilanci. 06 Feb 2020.
Function approximation by neural nets in the mean-field regime: Entropic regularization and controlled McKean-Vlasov dynamics. Belinda Tzen, Maxim Raginsky. 05 Feb 2020.
A Deep Conditioning Treatment of Neural Networks. Naman Agarwal, Pranjal Awasthi, Satyen Kale. 04 Feb 2020.
A Rigorous Framework for the Mean Field Limit of Multilayer Neural Networks. Phan-Minh Nguyen, H. Pham. 30 Jan 2020.
On the infinite width limit of neural networks with a standard parameterization. Jascha Narain Sohl-Dickstein, Roman Novak, S. Schoenholz, Jaehoon Lee. 21 Jan 2020.
Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity. Shiyu Liang, Ruoyu Sun, R. Srikant. 31 Dec 2019.