Select without Fear: Almost All Mini-Batch Schedules Generalize Optimally

3 May 2023
Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias

Papers citing "Select without Fear: Almost All Mini-Batch Schedules Generalize Optimally" (9 of 9 papers shown)
Rapid Overfitting of Multi-Pass Stochastic Gradient Descent in Stochastic Convex Optimization
Shira Vansover-Hager, Tomer Koren, Roi Livni
13 May 2025

Stability-based Generalization Analysis of Randomized Coordinate Descent for Pairwise Learning
Liang Wu, Ruixi Hu, Yunwen Lei
03 Mar 2025

Stability and Generalization for Minibatch SGD and Local SGD
Yunwen Lei, Tao Sun, Mingrui Liu
02 Oct 2023

Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
Patrik Okanovic, R. Waleffe, Vasilis Mageirakos, Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias, Nezihe Merve Gürel, Theodoros Rekatsinas
28 May 2023

Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
Yunwen Lei, Rong Jin, Yiming Ying
19 Sep 2022

On Generalization of Decentralized Learning with Separable Data
Hossein Taheri, Christos Thrampoulidis
15 Sep 2022

Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein
29 Sep 2021

Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates
Jeffrey Negrea, Mahdi Haghifam, Gintare Karolina Dziugaite, Ashish Khisti, Daniel M. Roy
06 Nov 2019

Stochastic Nonconvex Optimization with Large Minibatches
Weiran Wang, Nathan Srebro
25 Sep 2017