arXiv 1803.05621
Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate
15 March 2018
Shen-Yi Zhao, Gong-Duo Zhang, Ming-Wei Li, Wu-Jun Li
Papers citing
"Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate"
8 papers
On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants
Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola
23 Jun 2015
A distributed block coordinate descent method for training l1-regularized linear classifiers
D. Mahajan, S. Keerthi, S. Sundararajan
18 May 2014
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
19 Mar 2014
A Stochastic Quasi-Newton Method for Large-Scale Optimization
R. Byrd, Samantha Hansen, J. Nocedal, Y. Singer
27 Jan 2014
Minimizing Finite Sums with the Stochastic Average Gradient
Mark Schmidt, Nicolas Le Roux, Francis R. Bach
10 Sep 2013
Parallel Coordinate Descent Methods for Big Data Optimization
Peter Richtárik, Martin Takáč
04 Dec 2012
Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
Shai Shalev-Shwartz, Tong Zhang
10 Sep 2012
Parallel Coordinate Descent for L1-Regularized Loss Minimization
Joseph K. Bradley, Aapo Kyrola, Danny Bickson, Carlos Guestrin
26 May 2011