Semi-flat minima and saddle points by embedding neural networks to overparameterization (arXiv:1906.04868)
12 June 2019
Kenji Fukumizu, Shoichiro Yamaguchi, Yoh-ichi Mototake, Mirai Tanaka
Papers citing "Semi-flat minima and saddle points by embedding neural networks to overparameterization" (11 papers shown)
1. Uncovering Critical Sets of Deep Neural Networks via Sample-Independent Critical Lifting. Leyang Zhang, Yaoyu Zhang, Tao Luo. 19 May 2025. (BDL)
2. Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion. Zhiwei Bai, Jiajie Zhao, Yaoyu Zhang. 22 May 2024. (AI4CE)
3. Loss Landscape of Shallow ReLU-like Neural Networks: Stationary Points, Saddle Escape, and Network Embedding. Zhengqing Wu, Berfin Simsek, Francois Ged. 08 Feb 2024. (ODL)
4. Proximity to Losslessly Compressible Parameters. Matthew Farrugia-Roberts. 05 Jun 2023.
5. Loss Spike in Training Neural Networks. Zhongwang Zhang, Z. Xu. 20 May 2023.
6. Understanding the Initial Condensation of Convolutional Neural Networks. Zhangchen Zhou, Hanxu Zhou, Yuqing Li, Zhi-Qin John Xu. 17 May 2023. (MLT, AI4CE)
7. Functional Equivalence and Path Connectivity of Reducible Hyperbolic Tangent Networks. Matthew Farrugia-Roberts. 08 May 2023.
8. Linear Stability Hypothesis and Rank Stratification for Nonlinear Models. Yaoyu Zhang, Zhongwang Zhang, Leyang Zhang, Zhiwei Bai, Tao Luo, Z. Xu. 21 Nov 2022.
9. Embedding Principle in Depth for the Loss Landscape Analysis of Deep Neural Networks. Zhiwei Bai, Tao Luo, Z. Xu, Yaoyu Zhang. 26 May 2022.
10. Embedding Principle: a hierarchical structure of loss landscape of deep neural networks. Yaoyu Zhang, Yuqing Li, Zhongwang Zhang, Tao Luo, Z. Xu. 30 Nov 2021.
11. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. 15 Sep 2016. (ODL)