Larger is Better: The Effect of Learning Rates Enjoyed by Stochastic Optimization with Progressive Variance Reduction

17 April 2017
Fanhua Shang

Papers citing "Larger is Better: The Effect of Learning Rates Enjoyed by Stochastic Optimization with Progressive Variance Reduction"

3 papers shown.
Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than $O(1/ε)$
Yi Tian Xu, Yan Yan, Qihang Lin, Tianbao Yang
13 Jul 2016
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
19 Mar 2014
Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang
08 Dec 2012