Gradient Noise Convolution (GNC): Smoothing Loss Function for Distributed Large-Batch SGD

26 June 2019
Kosuke Haruki, Taiji Suzuki, Yohei Hamakawa, Takeshi Toda, Ryuji Sakai, M. Ozawa, Mitsuhiro Kimura
ODL
arXiv:1906.10822

Papers citing "Gradient Noise Convolution (GNC): Smoothing Loss Function for Distributed Large-Batch SGD" (5 papers shown)

1. Evolutionary algorithms as an alternative to backpropagation for supervised training of Biophysical Neural Networks and Neural ODEs
   James Hazelden, Yuhan Helena Liu, Eli Shlizerman, E. Shea-Brown
   17 Nov 2023

2. Towards Understanding Sharpness-Aware Minimization
   Maksym Andriushchenko, Nicolas Flammarion
   AAML · 13 Jun 2022

3. Tackling benign nonconvexity with smoothing and stochastic gradients
   Harsh Vardhan, Sebastian U. Stich
   18 Feb 2022

4. Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape
   Devansh Bisla, Jing Wang, A. Choromańska
   20 Jan 2022

5. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
   N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
   ODL · 15 Sep 2016