Chaotic Regularization and Heavy-Tailed Limits for Deterministic Gradient Descent (arXiv:2205.11361)
23 May 2022
S. H. Lim, Yijun Wan, Umut Şimşekli

Papers citing "Chaotic Regularization and Heavy-Tailed Limits for Deterministic Gradient Descent"

7 / 7 papers shown
Generalization Guarantees for Multi-View Representation Learning and Application to Regularization via Gaussian Product Mixture Prior
Milad Sefidgaran, Abdellatif Zaidi, Piotr Krasnowski
25 Apr 2025
Generalization Guarantees for Representation Learning via Data-Dependent Gaussian Mixture Priors
Milad Sefidgaran, A. Zaidi, Piotr Krasnowski
21 Feb 2025
Privacy of SGD under Gaussian or Heavy-Tailed Noise: Guarantees without Gradient Clipping
Umut Simsekli, Mert Gurbuzbalaban, S. Yıldırım, Lingjiong Zhu
04 Mar 2024
From Stability to Chaos: Analyzing Gradient Descent Dynamics in Quadratic Regression
Xuxing Chen, Krishnakumar Balasubramanian, Promit Ghosal, Bhavya Agrawalla
02 Oct 2023
Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions
Anant Raj, Lingjiong Zhu, Mert Gurbuzbalaban, Umut Simsekli
27 Jan 2023
Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein
29 Sep 2021
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016