arXiv:2107.00469
Never Go Full Batch (in Stochastic Convex Optimization)
29 June 2021
I. Zaghloul Amir, Y. Carmon, Tomer Koren, Roi Livni
Papers citing "Never Go Full Batch (in Stochastic Convex Optimization)" (13 papers):
The Sample Complexity of Gradient Descent in Stochastic Convex Optimization
Roi Livni (07 Apr 2024) [MLT]
Information Complexity of Stochastic Convex Optimization: Applications to Generalization and Memorization
Idan Attias, Gintare Karolina Dziugaite, Mahdi Haghifam, Roi Livni, Daniel M. Roy (14 Feb 2024)
The Sample Complexity of ERMs in Stochastic Convex Optimization
Dan Carmon, Roi Livni, Amir Yehudayoff (09 Nov 2023)
Stability and Generalization for Minibatch SGD and Local SGD
Yunwen Lei, Tao Sun, Mingrui Liu (02 Oct 2023)
Select without Fear: Almost All Mini-Batch Schedules Generalize Optimally
Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias (03 May 2023)
Information Theoretic Lower Bounds for Information Theoretic Upper Bounds
Roi Livni (09 Feb 2023)
Differentially Private Generalized Linear Models Revisited
R. Arora, Raef Bassily, Cristóbal Guzmán, Michael Menart, Enayat Ullah (06 May 2022) [FedML]
Beyond Lipschitz: Sharp Generalization and Excess Risk Bounds for Full-Batch GD
Konstantinos E. Nikolakakis, Farzin Haddadpour, Amin Karbasi, Dionysios S. Kalogerias (26 Apr 2022)
Making Progress Based on False Discoveries
Roi Livni (19 Apr 2022)
Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization
I. Zaghloul Amir, Roi Livni, Nathan Srebro (27 Feb 2022)
Black-Box Generalization: Stability of Zeroth-Order Learning
Konstantinos E. Nikolakakis, Farzin Haddadpour, Dionysios S. Kalogerias, Amin Karbasi (14 Feb 2022) [MLT]
Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein (29 Sep 2021)
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016) [ODL]