ResearchTrend.AI
© 2025 ResearchTrend.AI. All rights reserved.

The effective noise of Stochastic Gradient Descent

Francesca Mignacco, Pierfrancesco Urbani
arXiv:2112.10852 · 20 December 2021

Papers citing "The effective noise of Stochastic Gradient Descent"

9 / 9 papers shown
Convergence, Sticking and Escape: Stochastic Dynamics Near Critical Points in SGD
Dmitry Dudukalov, Artem Logachov, Vladimir Lotov, Timofei Prasolov, Evgeny Prokopenko, Anton Tarasenko
24 May 2025 · 0 citations
Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer
Blake Bordelon, Cengiz Pehlevan
04 Feb 2025 · 1 citation · AI4CE
Connecting NTK and NNGP: A Unified Theoretical Framework for Wide Neural Network Learning Dynamics
Yehonatan Avidan, Qianyi Li, H. Sompolinsky
08 Sep 2023 · 8 citations
Stochasticity helps to navigate rough landscapes: comparing gradient-descent-based algorithms in the phase retrieval problem
Francesca Mignacco, Pierfrancesco Urbani, Lenka Zdeborová
08 Mar 2021 · 36 citations
Poly-time universality and limitations of deep learning
Emmanuel Abbe, Colin Sandon
07 Jan 2020 · 23 citations
Tencent ML-Images: A Large-Scale Multi-Label Image Database for Visual Representation Learning
Baoyuan Wu, Weidong Chen, Yanbo Fan, Yong Zhang, Jinlong Hou, Jie Liu, Tong Zhang
07 Jan 2019 · 86 citations · VLM, MLLM
A Mean Field View of the Landscape of Two-Layers Neural Networks
Song Mei, Andrea Montanari, Phan-Minh Nguyen
18 Apr 2018 · 858 citations · MLT
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016 · 2,937 citations · ODL
Representation Learning: A Review and New Perspectives
Yoshua Bengio, Aaron Courville, Pascal Vincent
24 Jun 2012 · 12,439 citations · OOD, SSL