ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Overparameterization of deep ResNet: zero loss and mean-field analysis
Zhiyan Ding, Shi Chen, Qin Li, S. Wright
30 May 2021 · arXiv:2105.14417 · ODL

Papers citing "Overparameterization of deep ResNet: zero loss and mean-field analysis"

8 of 8 citing papers shown.

  1. A Local Polyak-Lojasiewicz and Descent Lemma of Gradient Descent For Overparametrized Linear Models
     Ziqing Xu, Hancheng Min, Salma Tarmoun, Enrique Mallada, Rene Vidal · 16 May 2025

  2. Understanding the training of infinitely deep and wide ResNets with Conditional Optimal Transport
     Raphael Barboni, Gabriel Peyré, François-Xavier Vialard · 19 Mar 2024

  3. Accelerating optimization over the space of probability measures
     Shi Chen, Wenxuan Wu, Yuhang Yao, Stephen J. Wright · 06 Oct 2023

  4. High-dimensional scaling limits and fluctuations of online least-squares SGD with smooth covariance
     Krishnakumar Balasubramanian, Promit Ghosal, Ye He · 03 Apr 2023

  5. A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks
     Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna · 28 Oct 2022 · MLT

  6. On the Global Convergence of Gradient Descent for multi-layer ResNets in the mean-field regime
     Zhiyan Ding, Shi Chen, Qin Li, S. Wright · 06 Oct 2021 · MLT, AI4CE

  7. Representing smooth functions as compositions of near-identity functions with implications for deep network optimization
     Peter L. Bartlett, S. Evans, Philip M. Long · 13 Apr 2018

  8. Global optimality conditions for deep neural networks
     Chulhee Yun, S. Sra, Ali Jadbabaie · 08 Jul 2017