A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
19 March 2014 · arXiv:1403.4699 · ODL
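The cited work is the Prox-SVRG method of Xiao and Zhang: an SVRG-style variance-reduced stochastic gradient step for the smooth part of a composite objective, followed by a proximal step for the regularizer. Below is a minimal sketch in Python of that idea, assuming an l1-regularized least-squares (lasso) objective; the function names, step size, and inner-loop length are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(A, b, lam, eta=0.01, n_stages=20, m=None, seed=0):
    """Sketch of a proximal SVRG loop for the lasso problem
        min_x (1/2n) * ||A x - b||^2 + lam * ||x||_1.
    Each stage computes a full gradient at a snapshot point, then runs m inner
    steps with variance-reduced stochastic gradients and a proximal update.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = m or 2 * n                              # inner-loop length (a common heuristic)
    x_tilde = np.zeros(d)                       # snapshot / reference point
    for _ in range(n_stages):
        full_grad = A.T @ (A @ x_tilde - b) / n # full gradient of the smooth part at the snapshot
        x = x_tilde.copy()
        iterates = []
        for _ in range(m):
            i = rng.integers(n)
            ai = A[i]
            # variance-reduced estimate: grad f_i(x) - grad f_i(x_tilde) + full_grad
            g = (ai @ x - b[i]) * ai - (ai @ x_tilde - b[i]) * ai + full_grad
            # proximal (soft-thresholding) step handles the l1 regularizer
            x = soft_threshold(x - eta * g, eta * lam)
            iterates.append(x)
        x_tilde = np.mean(iterates, axis=0)     # update snapshot by averaging inner iterates
    return x_tilde

# Tiny usage example on synthetic data.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = prox_svrg(A, b, lam=0.1)
```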

Papers citing "A Proximal Stochastic Gradient Method with Progressive Variance Reduction"

14 of 114 citing papers shown (title, authors, date):
MAGMA: Multi-level accelerated gradient mirror descent algorithm for large-scale convex composite minimization
Vahan Hovhannisyan, P. Parpas, S. Zafeiriou
18 Sep 2015
On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants
Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola
23 Jun 2015
Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives
Zeyuan Allen-Zhu, Yang Yuan
05 Jun 2015
Towards stability and optimality in stochastic gradient descent
Panos Toulis, Dustin Tran, E. Airoldi
10 May 2015
Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting
Jakub Konecný, Jie Liu, Peter Richtárik, Martin Takáč
16 Apr 2015 · ODL
Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields
Mark W. Schmidt, Reza Babanezhad, Mohamed Osama Ahmed, Aaron Defazio, Ann Clifton, Anoop Sarkar
16 Apr 2015
Stochastic Dual Coordinate Ascent with Adaptive Probabilities
Dominik Csiba, Zheng Qu, Peter Richtárik
27 Feb 2015 · ODL
Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss
Yuchen Zhang, Lin Xiao
01 Jan 2015
Randomized Dual Coordinate Ascent with Arbitrary Sampling
Zheng Qu, Peter Richtárik, Tong Zhang
21 Nov 2014
Local Rademacher Complexity for Multi-label Learning
Chang Xu, Tongliang Liu, Dacheng Tao, Chao Xu
26 Oct 2014
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Yuchen Zhang, Lin Xiao
10 Sep 2014
On Data Preconditioning for Regularized Loss Minimization
Tianbao Yang, R. L. Jin, Shenghuo Zhu, Qihang Lin
13 Aug 2014
Semi-Stochastic Gradient Descent Methods
Jakub Konecný, Peter Richtárik
05 Dec 2013 · ODL
Minimizing Finite Sums with the Stochastic Average Gradient
Mark W. Schmidt, Nicolas Le Roux, Francis R. Bach
10 Sep 2013