ResearchTrend.AI
On the Lipschitz Constant of Deep Networks and Double Descent
Matteo Gamba, Hossein Azizpour, Mårten Björkman
28 January 2023 (arXiv:2301.12309)

Papers citing "On the Lipschitz Constant of Deep Networks and Double Descent"
(10 of 10 papers shown)

On Space Folds of ReLU Neural Networks
Michal Lewandowski, Hamid Eghbalzadeh, Bernhard Heinzl, Raphael Pisoni, Bernhard A. Moser (MLT)
17 Feb 2025

Understanding the Role of Optimization in Double Descent
Chris Liu, Jeffrey Flanigan
06 Dec 2023

Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization
Elan Rosenfeld, Andrej Risteski
07 Nov 2023

On progressive sharpening, flat minima and generalisation
L. MacDonald, Jack Valmadre, Simon Lucey
24 May 2023

Some Fundamental Aspects about Lipschitz Continuity of Neural Networks
Grigory Khromov, Sidak Pal Singh
21 Feb 2023

Why neural networks find simple solutions: the many regularizers of geometric complexity
Benoit Dherin, Michael Munn, M. Rosca, David Barrett
27 Sep 2022

Deep Double Descent via Smooth Interpolation
Matteo Gamba, Erik Englesson, Mårten Björkman, Hossein Azizpour
21 Sep 2022

What Happens after SGD Reaches Zero Loss? --A Mathematical Framework
Zhiyuan Li, Tianhao Wang, Sanjeev Arora (MLT)
13 Oct 2021

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 Feb 2015