Linearized two-layers neural networks in high dimension
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari
arXiv:1904.12191 · MLT · 27 April 2019

Papers citing "Linearized two-layers neural networks in high dimension"

34 papers
Fine-Grained Verifiers: Preference Modeling as Next-token Prediction in Vision-Language Alignment
Chenhang Cui, An Zhang, Yiyang Zhou, Zhaorun Chen, Gelei Deng, Huaxiu Yao, Tat-Seng Chua
18 Oct 2024

Understanding Optimal Feature Transfer via a Fine-Grained Bias-Variance Analysis
Yufan Li, Subhabrata Sen, Ben Adlam
MLT · 18 Apr 2024

Nonlinear Meta-Learning Can Guarantee Faster Rates
Dimitri Meunier, Zhu Li, Arthur Gretton, Samory Kpotufe
20 Jul 2023

Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
Eshaan Nichani, Alexandru Damian, Jason D. Lee
MLT · 11 May 2023

Learning time-scales in two-layers neural networks
Raphael Berthier, Andrea Montanari, Kangjie Zhou
28 Feb 2023

Generalisation under gradient descent via deterministic PAC-Bayes
Eugenio Clerico, Tyler Farghly, George Deligiannidis, Benjamin Guedj, Arnaud Doucet
06 Sep 2022

On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels
Tengyuan Liang, Alexander Rakhlin, Xiyu Zhai
27 Aug 2019

The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari
14 Aug 2019

Limitations of Lazy Training of Two-layers Neural Networks
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari
MLT · 21 Jun 2019

Disentangling feature and lazy training in deep neural networks
Mario Geiger, S. Spigler, Arthur Jacot, Matthieu Wyart
19 Jun 2019

On the Power and Limitations of Random Features for Understanding Neural Networks
Gilad Yehudai, Ohad Shamir
MLT · 01 Apr 2019

Two models of double descent for weak features
M. Belkin, Daniel J. Hsu, Ji Xu
18 Mar 2019

Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
Jaehoon Lee, Lechao Xiao, S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Narain Sohl-Dickstein, Jeffrey Pennington
18 Feb 2019

Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit
Song Mei, Theodor Misiakiewicz, Andrea Montanari
MLT · 16 Feb 2019

Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang
MLT · 24 Jan 2019

Reconciling modern machine learning practice and the bias-variance trade-off
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal
28 Dec 2018

On Lazy Training in Differentiable Programming
Lénaïc Chizat, Edouard Oyallon, Francis R. Bach
19 Dec 2018

Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks
Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu
ODL · 21 Nov 2018

A Convergence Theory for Deep Learning via Over-Parameterization
Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song
AI4CE · ODL · 09 Nov 2018

Gradient Descent Finds Global Minima of Deep Neural Networks
S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Masayoshi Tomizuka
ODL · 09 Nov 2018

Gradient Descent Provably Optimizes Over-parameterized Neural Networks
S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh
MLT · ODL · 04 Oct 2018

Does data interpolation contradict statistical optimality?
M. Belkin, Alexander Rakhlin, Alexandre B. Tsybakov
25 Jun 2018

Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
20 Jun 2018

On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
Lénaïc Chizat, Francis R. Bach
OT · 24 May 2018

Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds
Santosh Vempala, John Wilmes
MLT · 07 May 2018

A Mean Field View of the Landscape of Two-Layers Neural Networks
Song Mei, Andrea Montanari, Phan-Minh Nguyen
MLT · 18 Apr 2018

To understand deep learning we need to understand kernel learning
M. Belkin, Siyuan Ma, Soumik Mandal
05 Feb 2018

The Implicit Bias of Gradient Descent on Separable Data
Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, Nathan Srebro
27 Oct 2017

The Landscape of Empirical Risk for Non-convex Losses
Song Mei, Yu Bai, Andrea Montanari
22 Jul 2016

Generalization Properties of Learning with Random Features
Alessandro Rudi, Lorenzo Rosasco
MLT · 14 Feb 2016

Breaking the Curse of Dimensionality with Convex Neural Networks
Francis R. Bach
30 Dec 2014

Sharp analysis of low-rank kernel matrix approximations
Francis R. Bach
09 Aug 2012

On information plus noise kernel random matrices
N. Karoui
11 Nov 2010

The spectrum of kernel random matrices
N. Karoui
04 Jan 2010