Cited By
Random-reshuffled SARAH does not need a full gradient computations
Aleksandr Beznosikov, Martin Takáč
arXiv:2111.13322, 26 November 2021

Papers citing "Random-reshuffled SARAH does not need a full gradient computations" (8 papers shown)

Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling
Daniil Medyakov, Gleb Molodtsov, S. Chezhegov, Alexey Rebrikov, Aleksandr Beznosikov
21 Feb 2025

Stochastic optimization with arbitrary recurrent data sampling
William G. Powell, Hanbaek Lyu
15 Jan 2024

Federated Learning with Regularized Client Participation
Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik
07 Feb 2023 (FedML)

Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning
Kazusato Oko, Shunta Akiyama, Tomoya Murata, Taiji Suzuki
01 Sep 2022 (FedML)

On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms
Lam M. Nguyen, Trang H. Tran
13 Jun 2022

New Convergence Aspects of Stochastic Gradient Algorithms
Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, K. Scheinberg, Martin Takáč, Marten van Dijk
10 Nov 2018

Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Aryan Mokhtari, Mert Gurbuzbalaban, Alejandro Ribeiro
01 Nov 2016

Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Julien Mairal
18 Feb 2014