Infinite-width limit of deep linear neural networks
arXiv:2211.16980, 29 November 2022
Lénaïc Chizat, Maria Colombo, Xavier Fernández-Real, Alessio Figalli

Papers citing "Infinite-width limit of deep linear neural networks" (9 of 9 papers shown)

  1. How Feature Learning Can Improve Neural Scaling Laws
     Blake Bordelon, Alexander B. Atanasov, Cengiz Pehlevan
     26 Sep 2024 (70 / 14 / 0)
  2. Gradient flows on graphons: existence, convergence, continuity equations
     Sewoong Oh, Soumik Pal, Raghav Somani, Raghavendra Tripathi
     18 Nov 2021 (27 / 5 / 0)
  3. Implicit Bias of SGD for Diagonal Linear Networks: a Provable Benefit of Stochasticity
     Scott Pesme, Loucas Pillaud-Vivien, Nicolas Flammarion
     17 Jun 2021 (39 / 100 / 0)
  4. Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't [AI4CE]
     E. Weinan, Chao Ma, Stephan Wojtowytsch, Lei Wu
     22 Sep 2020 (61 / 134 / 0)
  5. Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers [MLT]
     B. Bah, Holger Rauhut, Ulrich Terstiege, Michael Westdickenberg
     12 Oct 2019 (8 / 62 / 0)
  6. Kernel and Rich Regimes in Overparametrized Models
     Blake E. Woodworth, Suriya Gunasekar, Pedro H. P. Savarese, E. Moroshko, Itay Golan, Jason D. Lee, Daniel Soudry, Nathan Srebro
     13 Jun 2019 (55 / 358 / 0)
  7. Implicit Regularization in Deep Matrix Factorization [AI4CE]
     Sanjeev Arora, Nadav Cohen, Wei Hu, Yuping Luo
     31 May 2019 (61 / 500 / 0)
  8. Width Provably Matters in Optimization for Deep Linear Neural Networks
     S. Du, Wei Hu
     24 Jan 2019 (53 / 94 / 0)
  9. Gradient Descent Provably Optimizes Over-parameterized Neural Networks [MLT, ODL]
     S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh
     04 Oct 2018 (127 / 1,261 / 0)