Understanding the Loss Surface of Neural Networks for Binary Classification
19 February 2018
Shiyu Liang, Ruoyu Sun, Yixuan Li, R. Srikant

Papers citing "Understanding the Loss Surface of Neural Networks for Binary Classification"

Showing 10 of 60 citing papers.

Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
Henning Petzka, C. Sminchisescu
16 Dec 2018

Measure, Manifold, Learning, and Optimization: A Theory Of Neural Networks
Shuai Li
30 Nov 2018

Learning One-hidden-layer Neural Networks under General Input Distributions
Weihao Gao, Ashok Vardhan Makkuva, Sewoong Oh, Pramod Viswanath
09 Oct 2018 (MLT)

On the loss landscape of a class of deep neural networks with no bad local valleys
Quynh N. Nguyen, Mahesh Chandra Mukkamala, Matthias Hein
27 Sep 2018

Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex
Hongyang R. Zhang, Junru Shao, Ruslan Salakhutdinov
06 Jun 2018

Adding One Neuron Can Eliminate All Bad Local Minima
Shiyu Liang, Ruoyu Sun, J. Lee, R. Srikant
22 May 2018

Are ResNets Provably Better than Linear Predictors?
Ohad Shamir
18 Apr 2018

Small nonlinearities in activation functions create bad local minima in neural networks
Chulhee Yun, S. Sra, Ali Jadbabaie
10 Feb 2018 (ODL)

Global optimality conditions for deep neural networks
Chulhee Yun, S. Sra, Ali Jadbabaie
08 Jul 2017

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
30 Nov 2014 (ODL)