
Random-reshuffled SARAH does not need a full gradient computations

26 November 2021
Aleksandr Beznosikov, Martin Takáč
arXiv:2111.13322

Papers citing "Random-reshuffled SARAH does not need a full gradient computations" (5 papers)
Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling
Daniil Medyakov, Gleb Molodtsov, S. Chezhegov, Alexey Rebrikov, Aleksandr Beznosikov
21 Feb 2025
On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms
Lam M. Nguyen, Trang H. Tran
13 Jun 2022
New Convergence Aspects of Stochastic Gradient Algorithms
Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, K. Scheinberg, Martin Takáč, Marten van Dijk
10 Nov 2018
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Aryan Mokhtari, Mert Gurbuzbalaban, Alejandro Ribeiro
01 Nov 2016
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Julien Mairal
18 Feb 2014