ResearchTrend.AI
SMG: A Shuffling Gradient-Based Method with Momentum
24 November 2020
Trang H. Tran, Lam M. Nguyen, Quoc Tran-Dinh

Papers citing "SMG: A Shuffling Gradient-Based Method with Momentum" (18 papers shown)

Randomised Splitting Methods and Stochastic Gradient Descent
Luke Shaw, Peter A. Whalley
05 Apr 2025

A Generalized Version of Chung's Lemma and its Applications
Li Jiang, Xiao Li, Andre Milzarek, Junwen Qiu
09 Jun 2024

Last Iterate Convergence of Incremental Methods and Applications in Continual Learning
Xu Cai, Jelena Diakonikolas
11 Mar 2024

Shuffling Momentum Gradient Algorithm for Convex Optimization
Trang H. Tran, Quoc Tran-Dinh, Lam M. Nguyen
05 Mar 2024

Central Limit Theorem for Two-Timescale Stochastic Approximation with Markovian Noise: Theory and Applications
Jie Hu, Vishwaraj Doshi, Do Young Eun
17 Jan 2024

Enhancing Deep Neural Network Training Efficiency and Performance through Linear Prediction
Hejie Ying, Mengmeng Song, Yaohong Tang, S. Xiao, Zimin Xiao
17 Oct 2023

Mini-Batch Optimization of Contrastive Loss
Jaewoong Cho, Kartik K. Sreenivasan, Keon Lee, Kyunghoo Mun, Soheun Yi, Jeong-Gwan Lee, Anna Lee, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee
12 Jul 2023

Empirical Risk Minimization with Shuffled SGD: A Primal-Dual Perspective and Improved Bounds
Xu Cai, Cheuk Yin Lin, Jelena Diakonikolas
21 Jun 2023

Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis
Quoc Tran-Dinh, Marten van Dijk
19 Dec 2022

Provable Adaptivity of Adam under Non-uniform Smoothness
Bohan Wang, Yushun Zhang, Huishuai Zhang, Qi Meng, Ruoyu Sun, Zhirui Ma, Tie-Yan Liu, Zhimin Luo, Wei Chen
21 Aug 2022

On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms
Lam M. Nguyen, Trang H. Tran
13 Jun 2022

Sampling without Replacement Leads to Faster Rates in Finite-Sum Minimax Optimization
Aniket Das, Bernhard Schölkopf, Michael Muehlebach
07 Jun 2022

Nesterov Accelerated Shuffling Gradient Method for Convex Optimization
Trang H. Tran, K. Scheinberg, Lam M. Nguyen
07 Feb 2022

Finite-Sum Optimization: A New Perspective for Convergence to a Global Solution
Lam M. Nguyen, Trang H. Tran, Marten van Dijk
07 Feb 2022

Distributed Random Reshuffling over Networks
Kun-Yen Huang, Xiao Li, Andre Milzarek, Shi Pu, Junwen Qiu
31 Dec 2021

Random-reshuffled SARAH does not need a full gradient computations
Aleksandr Beznosikov, Martin Takáč
26 Nov 2021

Permutation-Based SGD: Is Random Optimal?
Shashank Rajput, Kangwook Lee, Dimitris Papailiopoulos
19 Feb 2021

New Convergence Aspects of Stochastic Gradient Algorithms
Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, K. Scheinberg, Martin Takáč, Marten van Dijk
10 Nov 2018