Random-reshuffled SARAH does not need a full gradient computations
Aleksandr Beznosikov, Martin Takáč
arXiv:2111.13322, 26 November 2021
Papers citing "Random-reshuffled SARAH does not need a full gradient computations" (5 papers)
Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling
Daniil Medyakov, Gleb Molodtsov, S. Chezhegov, Alexey Rebrikov, Aleksandr Beznosikov
21 Feb 2025
On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms
Lam M. Nguyen, Trang H. Tran
13 Jun 2022
New Convergence Aspects of Stochastic Gradient Algorithms
Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, K. Scheinberg, Martin Takáč, Marten van Dijk
10 Nov 2018
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Aryan Mokhtari, Mert Gurbuzbalaban, Alejandro Ribeiro
01 Nov 2016
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Julien Mairal
18 Feb 2014