ResearchTrend.AI
On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications
arXiv:2110.03128, 7 October 2021
Ziqiao Wang, Yongyi Mao [FedML, MLT]

Papers citing "On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications"

24 citing papers shown.
Generalization Bounds via Conditional $f$-Information (30 Oct 2024)
Ziqiao Wang, Yongyi Mao [FedML]
Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems (18 Oct 2024)
Bingcong Li, Liang Zhang, Niao He
Towards Understanding Epoch-wise Double Descent in Two-layer Linear Neural Networks (13 Jul 2024)
Amanda Olmin, Fredrik Lindsten [MLT]
How Does Distribution Matching Help Domain Generalization: An Information-theoretic Analysis (14 Jun 2024)
Yuxin Dong, Tieliang Gong, Hong Chen, Shuangyong Song, Weizhan Zhang, Chen Li [OOD]
Error Bounds of Supervised Classification from Information-Theoretic Perspective (07 Jun 2024)
Binchuan Qi, Wei Gong, Li Li
Sharpness-Aware Minimization for Evolutionary Feature Construction in Regression (11 May 2024)
Hengzhe Zhang, Qi Chen, Bing Xue, Wolfgang Banzhaf, Mengjie Zhang [AAML]
Information-Theoretic Generalization Bounds for Deep Neural Networks (04 Apr 2024)
Haiyun He, Christina Lee Yu
Revisiting Random Weight Perturbation for Efficiently Improving Generalization (30 Mar 2024)
Tao Li, Qinghua Tao, Weihao Yan, Zehao Lei, Yingwen Wu, Kun Fang, M. He, Xiaolin Huang [AAML]
Soften to Defend: Towards Adversarial Robustness via Self-Guided Label Refinement (14 Mar 2024)
Daiwei Yu, Zhuorong Li, Lina Wei, Canghong Jin, Yun Zhang, Sixian Chan
Class-wise Generalization Error: An Information-Theoretic Analysis (05 Jan 2024)
Firas Laakom, Yuheng Bu, M. Gabbouj
Information-Theoretic Generalization Bounds for Transductive Learning and Its Applications (08 Nov 2023)
Huayi Tang, Yong Liu
Time-Independent Information-Theoretic Generalization Bounds for SGLD (02 Nov 2023)
Futoshi Futami, Masahiro Fujisawa
Sample-Conditioned Hypothesis Stability Sharpens Information-Theoretic Generalization Bounds (31 Oct 2023)
Ziqiao Wang, Yongyi Mao
Enhancing Sharpness-Aware Optimization Through Variance Suppression (27 Sep 2023)
Bingcong Li, G. Giannakis [AAML]
Understanding the Generalization Ability of Deep Learning Algorithms: A Kernelized Rényi's Entropy Perspective (02 May 2023)
Yuxin Dong, Tieliang Gong, H. Chen, Chen Li
Over-training with Mixup May Hurt Generalization (02 Mar 2023)
Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao [NoLa]
Tighter Information-Theoretic Generalization Bounds from Supersamples (05 Feb 2023)
Ziqiao Wang, Yongyi Mao
Limitations of Information-Theoretic Generalization Bounds for Gradient Descent Methods in Stochastic Convex Optimization (27 Dec 2022)
Mahdi Haghifam, Borja Rodríguez Gálvez, Ragnar Thobaben, Mikael Skoglund, Daniel M. Roy, Gintare Karolina Dziugaite
Two Facets of SDE Under an Information-Theoretic Lens: Generalization of SGD via Training Trajectories and via Terminal States (19 Nov 2022)
Ziqiao Wang, Yongyi Mao
Information-Theoretic Analysis of Unsupervised Domain Adaptation (03 Oct 2022)
Ziqiao Wang, Yongyi Mao
Towards Understanding Sharpness-Aware Minimization (13 Jun 2022)
Maksym Andriushchenko, Nicolas Flammarion [AAML]
Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates (06 Nov 2019)
Jeffrey Negrea, Mahdi Haghifam, Gintare Karolina Dziugaite, Ashish Khisti, Daniel M. Roy [FedML]
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima (15 Sep 2016)
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang [ODL]
Norm-Based Capacity Control in Neural Networks (27 Feb 2015)
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro