Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate

15 March 2018 · arXiv:1803.05621
Shen-Yi Zhao, Gong-Duo Zhang, Ming-Wei Li, Wu-Jun Li

Papers citing "Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate" (8 papers):
On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants
Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola · 23 Jun 2015

A distributed block coordinate descent method for training $l_1$ regularized linear classifiers
D. Mahajan, S. Keerthi, S. Sundararajan · 18 May 2014

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang · 19 Mar 2014

A Stochastic Quasi-Newton Method for Large-Scale Optimization
R. Byrd, Samantha Hansen, J. Nocedal, Y. Singer · 27 Jan 2014

Minimizing Finite Sums with the Stochastic Average Gradient
Mark Schmidt, Nicolas Le Roux, Francis R. Bach · 10 Sep 2013

Parallel Coordinate Descent Methods for Big Data Optimization
Peter Richtárik, Martin Takáč · 04 Dec 2012

Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
Shai Shalev-Shwartz, Tong Zhang · 10 Sep 2012

Parallel Coordinate Descent for L1-Regularized Loss Minimization
Joseph K. Bradley, Aapo Kyrola, Danny Bickson, Carlos Guestrin · 26 May 2011