

Theory of Deep Learning IIb: Optimization Properties of SGD

7 January 2018
Chiyuan Zhang
Qianli Liao
Alexander Rakhlin
Brando Miranda
Noah Golowich
Tomaso Poggio
Abstract

In Theory IIb, we characterize, with a mix of theory and experiments, the optimization of deep convolutional networks by Stochastic Gradient Descent (SGD). The main new result in this paper is theoretical and experimental evidence for the following conjecture about SGD: like the classical Langevin equation, SGD concentrates in probability on large-volume, "flat" minima, selecting flat minimizers which are, with very high probability, also global minimizers.
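
The conjecture leans on a standard property of Langevin dynamics: the stationary density of dW = -grad L(W) dt + sqrt(2T) dB is proportional to exp(-L(W)/T), so minima of equal depth are weighted by the volume of their basins, and flat basins dominate at low temperature. The sketch below is a toy illustration of that intuition, not the paper's code or experimental setup; the 1D loss, learning rate, and temperature are made-up values chosen only so that the dynamics mix between basins.

```python
import numpy as np

# Minimal illustrative sketch (hypothetical setup): overdamped Langevin
# dynamics, i.e. gradient descent plus isotropic Gaussian noise, on a toy
# 1D loss with two equally deep minima -- a sharp one near w = -1 and a
# flat one near w = +1. Since the stationary density is ~ exp(-loss/T),
# the flat basin's larger volume should attract most of the time spent.

def loss(w):
    sharp = 8.0 * (w + 1.0) ** 2   # high-curvature (sharp) basin
    flat = 0.5 * (w - 1.0) ** 2    # low-curvature (flat) basin
    return np.minimum(sharp, flat)

def grad(w, eps=1e-5):
    # Central finite difference keeps the sketch self-contained.
    return (loss(w + eps) - loss(w - eps)) / (2.0 * eps)

rng = np.random.default_rng(0)
lr, temperature = 1e-2, 0.4        # arbitrary toy hyperparameters
steps, burn_in = 200_000, 20_000
w = 0.0
in_flat = in_sharp = 0

for t in range(steps):
    # Euler-Maruyama step of the Langevin SDE: gradient step plus noise
    # whose scale is set by the learning rate and the "temperature".
    w += -lr * grad(w) + np.sqrt(2.0 * lr * temperature) * rng.normal()
    if t >= burn_in:
        if w > 0.0:                # crude basin assignment by nearest minimum
            in_flat += 1
        else:
            in_sharp += 1

total = in_flat + in_sharp
print(f"time near flat minimum:  {in_flat / total:.2f}")
print(f"time near sharp minimum: {in_sharp / total:.2f}")
```

Under these made-up settings, the flat basin should receive the larger share of the time, even though both minima are global; this mirrors, in one dimension, the flat-minima concentration described in the abstract.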

View on arXiv: 1801.02254