Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks

7 June 2021
Melih Barsbey, Romain Chor, Murat A. Erdogdu, Gaël Richard, Umut Simsekli
arXiv:2106.03795 · PDF · HTML

Papers citing "Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks"

18 / 18 papers shown

Generalization Guarantees for Multi-View Representation Learning and Application to Regularization via Gaussian Product Mixture Prior
Romain Chor, Abdellatif Zaidi, Piotr Krasnowski
25 Apr 2025

Generalization Guarantees for Representation Learning via Data-Dependent Gaussian Mixture Priors
Romain Chor, Milad Sefidgaran, Piotr Krasnowski
21 Feb 2025

Nonlinear Stochastic Gradient Descent and Heavy-tailed Noise: A Unified Framework and High-probability Guarantees
Aleksandar Armacki, Shuhua Yu, Pranay Sharma, Gauri Joshi, Dragana Bajović, D. Jakovetić, S. Kar
17 Oct 2024

Privacy of SGD under Gaussian or Heavy-Tailed Noise: Guarantees without Gradient Clipping
Umut Simsekli, Mert Gurbuzbalaban, S. Yıldırım, Lingjiong Zhu
04 Mar 2024

Generalization Guarantees via Algorithm-dependent Rademacher Complexity
Sarah Sachs, T. Erven, Liam Hodgkinson, Rajiv Khanna, Umut Simsekli
04 Jul 2023

Heavy-Tailed Regularization of Weight Matrices in Deep Neural Networks
Xuanzhe Xiao, Zengyi Li, Chuanlong Xie, Fengwei Zhou
06 Apr 2023

Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD than Constant Stepsize
Mert Gurbuzbalaban, Yuanhan Hu, Umut Simsekli, Lingjiong Zhu
10 Feb 2023 · LRM

Generalization Bounds with Data-dependent Fractal Dimensions
Benjamin Dupuis, George Deligiannidis, Umut Simsekli
06 Feb 2023 · AI4CE

Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions
Anant Raj, Lingjiong Zhu, Mert Gurbuzbalaban, Umut Simsekli
27 Jan 2023

Neural Networks Efficiently Learn Low-Dimensional Representations with SGD
Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu
29 Sep 2022 · MLT

Algorithmic Stability of Heavy-Tailed Stochastic Gradient Descent on Least Squares
Anant Raj, Melih Barsbey, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Simsekli
02 Jun 2022

Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility
Hoileong Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron
17 May 2022

Heavy-Tail Phenomenon in Decentralized SGD
Mert Gurbuzbalaban, Yuanhan Hu, Umut Simsekli, Kun Yuan, Lingjiong Zhu
13 May 2022

Rate-Distortion Theoretic Generalization Bounds for Stochastic Learning Algorithms
Romain Chor, A. Gohari, Gaël Richard, Umut Simsekli
04 Mar 2022

Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
Tolga Birdal, Aaron Lou, Leonidas J. Guibas, Umut Simsekli
25 Nov 2021

What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
06 Mar 2020

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 Feb 2015