Loss landscapes and optimization in over-parameterized non-linear systems and neural networks
arXiv:2003.00307 · 29 February 2020 · ODL
Chaoyue Liu, Libin Zhu, M. Belkin

Papers citing "Loss landscapes and optimization in over-parameterized non-linear systems and neural networks"

18 / 168 papers shown
On generalization bounds for deep networks based on loss surface implicit regularization
  Masaaki Imaizumi, Johannes Schmidt-Hieber · ODL · 12 Jan 2022

Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time
  Zhao Song, Licheng Zhang, Ruizhe Zhang · 14 Dec 2021

Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity
  Junchi Yang, Antonio Orvieto, Aurelien Lucchi, Niao He · 10 Dec 2021

Global convergence of ResNets: From finite to infinite width using linear parameterization
  Raphael Barboni, Gabriel Peyré, François-Xavier Vialard · 10 Dec 2021

On the Equivalence between Neural Network and Support Vector Machine
  Yilan Chen, Wei Huang, Lam M. Nguyen, Tsui-Wei Weng · AAML · 11 Nov 2021

Theoretical Exploration of Flexible Transmitter Model
  Jin-Hui Wu, Shao-Qun Zhang, Yuan Jiang, Zhiping Zhou · 11 Nov 2021

Subquadratic Overparameterization for Shallow Neural Networks
  Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, Volkan Cevher · 02 Nov 2021

The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks
  R. Entezari, Hanie Sedghi, O. Saukh, Behnam Neyshabur · MoMe · 12 Oct 2021

How much pre-training is enough to discover a good subnetwork?
  Cameron R. Wolfe, Fangshuo Liao, Qihan Wang, Junhyung Lyle Kim, Anastasios Kyrillidis · 31 Jul 2021

Stability & Generalisation of Gradient Descent for Shallow Neural Networks without the Neural Tangent Kernel
  Dominic Richards, Ilja Kuzborskij · 27 Jul 2021

Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent
  Spencer Frei, Quanquan Gu · 25 Jun 2021

Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
  M. Belkin · 29 May 2021

Decentralized Federated Averaging
  Tao Sun, Dongsheng Li, Bao Wang · FedML · 23 Apr 2021

Learning with Gradient Descent and Weakly Convex Losses
  Dominic Richards, Michael G. Rabbat · MLT · 13 Jan 2021

Characterization of Excess Risk for Locally Strongly Convex Population Risk
  Mingyang Yi, Ruoyu Wang, Zhi-Ming Ma · 04 Dec 2020

Deterministic tensor completion with hypergraph expanders
  K. Harris, Yizhe Zhu · 23 Oct 2019

On the Benefit of Width for Neural Networks: Disappearance of Bad Basins
  Dawei Li, Tian Ding, Ruoyu Sun · 28 Dec 2018

Newton-MR: Inexact Newton Method With Minimum Residual Sub-problem Solver
  Fred Roosta, Yang Liu, Peng Xu, Michael W. Mahoney · 30 Sep 2018