Distribution-Specific Hardness of Learning Neural Networks

Ohad Shamir
5 September 2016

Papers citing "Distribution-Specific Hardness of Learning Neural Networks" (14 papers shown)

  • Low-dimensional Functions are Efficiently Learnable under Randomly Biased Distributions. Elisabetta Cornacchia, Dan Mikulincer, Elchanan Mossel. 10 Feb 2025.
  • On the Complexity of Learning Neural Networks. Le Song, Santosh Vempala, John Wilmes, Bo Xie. 14 Jul 2017.
  • Deep Semi-Random Features for Nonlinear Function Approximation. Kenji Kawaguchi, Bo Xie, Vikas Verma, Le Song. 28 Feb 2017.
  • Exponentially vanishing sub-optimal local minima in multilayer neural networks. Daniel Soudry, Elad Hoffer. 19 Feb 2017.
  • No bad local minima: Data independent training error guarantees for multilayer neural networks. Daniel Soudry, Y. Carmon. 26 May 2016.
  • Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity. Amit Daniely, Roy Frostig, Y. Singer. 18 Feb 2016.
  • Learning Halfspaces and Neural Networks with Random Initialization. Yuchen Zhang, Jason D. Lee, Martin J. Wainwright, Michael I. Jordan. 25 Nov 2015.
  • On the Quality of the Initial Basin in Overspecified Neural Networks. Itay Safran, Ohad Shamir. 13 Nov 2015.
  • Beyond Convexity: Stochastic Quasi-Convex Optimization. Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz. 08 Jul 2015.
  • Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Sergey Ioffe, Christian Szegedy. 11 Feb 2015.
  • The Loss Surfaces of Multilayer Networks. A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun. 30 Nov 2014.
  • On the Computational Efficiency of Training Neural Networks. Roi Livni, Shai Shalev-Shwartz, Ohad Shamir. 05 Oct 2014.
  • Complexity theoretic limitations on learning DNF's. Amit Daniely, Shai Shalev-Shwartz. 13 Apr 2014.
  • Provable Bounds for Learning Some Deep Representations. Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma. 23 Oct 2013.