
Spurious Valleys in Two-layer Neural Network Optimization Landscapes
Luca Venturi, Afonso S. Bandeira, Joan Bruna
18 February 2018

Papers citing "Spurious Valleys in Two-layer Neural Network Optimization Landscapes"

14 / 14 citing papers shown
NTK-SAP: Improving neural network pruning by aligning training dynamics
Yite Wang, Dawei Li, Ruoyu Sun
06 Apr 2023
When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work
Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo
21 Oct 2022
Exponentially Many Local Minima in Quantum Neural Networks
Xuchen You, Xiaodi Wu
06 Oct 2021
Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes
James Lucas, Juhan Bae, Michael Ruogu Zhang, Stanislav Fort, R. Zemel, Roger C. Grosse
22 Apr 2021
Noisy Gradient Descent Converges to Flat Minima for Nonconvex Matrix Factorization
Tianyi Liu, Yan Li, S. Wei, Enlu Zhou, T. Zhao
24 Feb 2021
A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks
Asaf Noy, Yi Tian Xu, Y. Aflalo, Lihi Zelnik-Manor, R. L. Jin
12 Jan 2021
Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation
Reinhard Heckel, Mahdi Soltanolkotabi
07 May 2020
Optimization for deep learning: theory and algorithms
Ruoyu Sun
19 Dec 2019
Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
Reinhard Heckel, Mahdi Soltanolkotabi
31 Oct 2019
Generating Accurate Pseudo-labels in Semi-Supervised Learning and Avoiding Overconfident Predictions via Hermite Polynomial Activations
Vishnu Suresh Lokhande, Songwong Tasneeyapant, Abhay Venkatesh, Sathya Ravi, Vikas Singh
12 Sep 2019
On the Expressive Power of Deep Polynomial Neural Networks
Joe Kileel, Matthew Trager, Joan Bruna
29 May 2019
Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak
27 Mar 2019
On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
Lénaïc Chizat, Francis R. Bach
24 May 2018
The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
30 Nov 2014