ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Random Reshuffling with Variance Reduction: New Analysis and Better Rates

19 April 2021
Grigory Malinovsky
Alibek Sailanbayev
Peter Richtárik

Papers citing "Random Reshuffling with Variance Reduction: New Analysis and Better Rates"

7 papers shown.

Random Reshuffling: Simple Analysis with Vast Improvements
Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik (10 Jun 2020)

A Unified Convergence Analysis for Shuffling-Type Gradient Methods
Lam M. Nguyen, Quoc Tran-Dinh, Dzung Phan, Phuong Ha Nguyen, Marten van Dijk (19 Feb 2020)

SGD without Replacement: Sharper Rates for General Smooth Convex Functions
Prateek Jain, Dheeraj M. Nagaraj, Praneeth Netrapalli (04 Mar 2019)

Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop
D. Kovalev, Samuel Horváth, Peter Richtárik (24 Jan 2019)

Stochastic Learning under Random Reshuffling with Constant Step-sizes
Bicheng Ying, Kun Yuan, Stefan Vlaski, Ali H. Sayed (21 Mar 2018)

SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien (01 Jul 2014)

Practical recommendations for gradient-based training of deep architectures
Yoshua Bengio (24 Jun 2012)