ResearchTrend.AI

Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization

Yangyang Xu, Yibo Xu
arXiv:2006.00425, 31 May 2020

Papers citing "Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization" (4 of 4 papers shown)
  1. Non-Convex Stochastic Composite Optimization with Polyak Momentum — Yuan Gao, Anton Rodomanov, Sebastian U. Stich (05 Mar 2024)
  2. Variance-reduced accelerated methods for decentralized stochastic double-regularized nonconvex strongly-concave minimax problems — Gabriel Mancino-Ball, Yangyang Xu (14 Jul 2023)
  3. Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization — Zichong Li, Pinzhuo Chen, Sijia Liu, Songtao Lu, Yangyang Xu (19 Dec 2022)
  4. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima — N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)