ResearchTrend.AI
SGD: The Role of Implicit Regularization, Batch-size and Multiple-epochs (arXiv:2107.05074)

11 July 2021
Satyen Kale, Ayush Sekhari, Karthik Sridharan

Papers citing "SGD: The Role of Implicit Regularization, Batch-size and Multiple-epochs"

9 papers:
  1. The Sample Complexity of Gradient Descent in Stochastic Convex Optimization — Roi Livni (07 Apr 2024)
  2. Non-Convex Stochastic Composite Optimization with Polyak Momentum — Yuan Gao, Anton Rodomanov, Sebastian U. Stich (05 Mar 2024)
  3. On the Overlooked Structure of Stochastic Gradients — Zeke Xie, Qian-Yuan Tang, Mingming Sun, P. Li (05 Dec 2022)
  4. From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent — Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan (13 Oct 2022)
  5. Benign Underfitting of Stochastic Gradient Descent — Tomer Koren, Roi Livni, Yishay Mansour, Uri Sherman (27 Feb 2022)
  6. Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization — I Zaghloul Amir, Roi Livni, Nathan Srebro (27 Feb 2022)
  7. Remember What You Want to Forget: Algorithms for Machine Unlearning — Ayush Sekhari, Jayadev Acharya, Gautam Kamath, A. Suresh (04 Mar 2021)
  8. DEUP: Direct Epistemic Uncertainty Prediction — Salem Lahlou, Moksh Jain, Hadi Nekoei, V. Butoi, Paul Bertin, Jarrid Rector-Brooks, Maksym Korablyov, Yoshua Bengio (16 Feb 2021)
  9. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima — N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)