
arXiv:1804.06561
A Mean Field View of the Landscape of Two-Layers Neural Networks

18 April 2018
Song Mei, Andrea Montanari, Phan-Minh Nguyen
MLT

Papers citing "A Mean Field View of the Landscape of Two-Layers Neural Networks"

50 / 206 papers shown
Sinkformers: Transformers with Doubly Stochastic Attention
Michael E. Sander, Pierre Ablin, Mathieu Blondel, Gabriel Peyré
22 Oct 2021

The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program
Yifei Wang, Mert Pilanci
MLT, MDE
13 Oct 2021

Parallel Deep Neural Networks Have Zero Duality Gap
Yifei Wang, Tolga Ergen, Mert Pilanci
13 Oct 2021

The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks
R. Entezari, Hanie Sedghi, O. Saukh, Behnam Neyshabur
MoMe
12 Oct 2021

AIR-Net: Adaptive and Implicit Regularization Neural Network for Matrix Completion
Zhemin Li, Tao Sun, Hongxia Wang, Bao Wang
12 Oct 2021

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
UQCV, MLT
12 Oct 2021

Tighter Sparse Approximation Bounds for ReLU Neural Networks
Carles Domingo-Enrich, Youssef Mroueh
07 Oct 2021

On the Global Convergence of Gradient Descent for multi-layer ResNets in the mean-field regime
Zhiyan Ding, Shi Chen, Qin Li, S. Wright
MLT, AI4CE
06 Oct 2021
The emergence of a concept in shallow neural networks
E. Agliari, Francesco Alemanno, Adriano Barra, G. D. Marzo
01 Sep 2021

The loss landscape of deep linear neural networks: a second-order analysis
E. M. Achour, François Malgouyres, Sébastien Gerchinovitz
ODL
28 Jul 2021

Analytic Study of Families of Spurious Minima in Two-Layer ReLU Neural Networks: A Tale of Symmetry II
Yossi Arjevani, M. Field
21 Jul 2021

The Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations, and Anomalous Diffusion
D. Kunin, Javier Sagastuy-Breña, Lauren Gillespie, Eshed Margalit, Hidenori Tanaka, Surya Ganguli, Daniel L. K. Yamins
19 Jul 2021

Dual Training of Energy-Based Models with Overparametrized Shallow Neural Networks
Carles Domingo-Enrich, A. Bietti, Marylou Gabrié, Joan Bruna, Eric Vanden-Eijnden
FedML
11 Jul 2021

Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction
Dominik Stöger, Mahdi Soltanolkotabi
ODL
28 Jun 2021

Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent
Spencer Frei, Quanquan Gu
25 Jun 2021

KALE Flow: A Relaxed KL Gradient Flow for Probabilities with Disjoint Support
Pierre Glaser, Michael Arbel, A. Gretton
16 Jun 2021
The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
Geoff Pleiss, John P. Cunningham
11 Jun 2021

The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization
Mufan Bill Li, Mihai Nica, Daniel M. Roy
07 Jun 2021

Learning particle swarming models from data with Gaussian processes
Jinchao Feng, Charles Kulick, Yunxiang Ren, Sui Tang
04 Jun 2021

Global Convergence of Three-layer Neural Networks in the Mean Field Regime
H. Pham, Phan-Minh Nguyen
MLT, AI4CE
11 May 2021

Relative stability toward diffeomorphisms indicates performance in deep nets
Leonardo Petrini, Alessandro Favero, Mario Geiger, M. Wyart
OOD
06 May 2021

Two-layer neural networks with values in a Banach space
Yury Korolev
05 May 2021

Generalization Guarantees for Neural Architecture Search with Train-Validation Split
Samet Oymak, Mingchen Li, Mahdi Soltanolkotabi
AI4CE, OOD
29 Apr 2021

Deep limits and cut-off phenomena for neural networks
B. Avelin, A. Karlsson
AI4CE
21 Apr 2021

On Energy-Based Models with Overparametrized Shallow Neural Networks
Carles Domingo-Enrich, A. Bietti, Eric Vanden-Eijnden, Joan Bruna
BDL
15 Apr 2021
The Discovery of Dynamics via Linear Multistep Methods and Deep Learning: Error Estimation
Q. Du, Yiqi Gu, Haizhao Yang, Chao Zhou
21 Mar 2021

Label-Imbalanced and Group-Sensitive Classification under Overparameterization
Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis
02 Mar 2021

Experiments with Rich Regime Training for Deep Learning
Xinyan Li, A. Banerjee
26 Feb 2021

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli
AAML, FAtt
25 Feb 2021

Wasserstein Proximal of GANs
A. Lin, Wuchen Li, Stanley Osher, Guido Montúfar
GAN
13 Feb 2021

A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network
Mo Zhou, Rong Ge, Chi Jin
04 Feb 2021

Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training
Cong Fang, Hangfeng He, Qi Long, Weijie J. Su
FAtt
29 Jan 2021

A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Equations
Jianfeng Lu, Yulong Lu, Min Wang
05 Jan 2021

Align, then memorise: the dynamics of learning with feedback alignment
Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, Sebastian Goldt
24 Nov 2020
Neural collapse with unconstrained features
D. Mixon, Hans Parshall, Jianzong Pi
23 Nov 2020

Reliable Off-policy Evaluation for Reinforcement Learning
Jie Wang, Rui Gao, H. Zha
OffRL
08 Nov 2020

Neural Network Approximation: Three Hidden Layers Are Enough
Zuowei Shen, Haizhao Yang, Shijun Zhang
25 Oct 2020

Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime
Andrea Agazzi, Jianfeng Lu
22 Oct 2020

Prediction intervals for Deep Neural Networks
Tullio Mancini, Hector F. Calvo-Pardo, Jose Olmo
UQCV, OOD
08 Oct 2020

Deep Equals Shallow for ReLU Networks in Kernel Regimes
A. Bietti, Francis R. Bach
30 Sep 2020

Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen, Adam R. Klivans, Raghu Meka
28 Sep 2020

Machine Learning and Computational Mathematics
Weinan E
PINN, AI4CE
23 Sep 2020

Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't
Weinan E, Chao Ma, Stephan Wojtowytsch, Lei Wu
AI4CE
22 Sep 2020
Deep Networks and the Multiple Manifold Problem
Sam Buchanan, D. Gilboa, John N. Wright
25 Aug 2020

Geometric compression of invariant manifolds in neural nets
J. Paccolat, Leonardo Petrini, Mario Geiger, Kevin Tyloo, M. Wyart
MLT
22 Jul 2020

Maximum likelihood estimation of potential energy in interacting particle systems from single-trajectory data
Xiaohui Chen
21 Jul 2020

Quantitative Propagation of Chaos for SGD in Wide Neural Networks
Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Simsekli
13 Jul 2020

Two-Layer Neural Networks for Partial Differential Equations: Optimization and Generalization Theory
Tao Luo, Haizhao Yang
28 Jun 2020

The Gaussian equivalence of generative models for learning with shallow neural networks
Sebastian Goldt, Bruno Loureiro, Galen Reeves, Florent Krzakala, M. Mézard, Lenka Zdeborová
BDL
25 Jun 2020

An analytic theory of shallow networks dynamics for hinge loss classification
Franco Pellegrini, Giulio Biroli
19 Jun 2020