Optimization Insights into Deep Diagonal Linear Networks

21 December 2024
Hippolyte Labarrière
Cesare Molinari
Lorenzo Rosasco
Silvia Villa
Cristian Vega
Abstract

Overparameterized models trained with (stochastic) gradient descent are ubiquitous in modern machine learning. These large models achieve unprecedented performance on test data, but their theoretical understanding is still limited. In this paper, we take a step towards filling this gap by adopting an optimization perspective. More precisely, we study the implicit regularization properties of the gradient flow "algorithm" for estimating the parameters of a deep diagonal linear network. Our main contribution is showing that this gradient flow induces a mirror flow on the model, meaning that the learned model is biased towards a specific solution of the problem that depends on the initialization of the network. Along the way, we prove several properties of the trajectory.

@article{labarriere2025_2412.16765,
  title={Optimization Insights into Deep Diagonal Linear Networks},
  author={Hippolyte Labarrière and Cesare Molinari and Lorenzo Rosasco and Silvia Villa and Cristian Vega},
  journal={arXiv preprint arXiv:2412.16765},
  year={2025}
}