Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima


Papers citing "Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima" (21 papers)

Do Deep Nets Really Need to be Deep?
Lei Jimmy Ba, R. Caruana
21 Dec 2013
