Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization
Shai Shalev-Shwartz, Tong Zhang
arXiv:1309.2375, 10 September 2013

Papers citing "Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization"
Showing 50 of 61 citing papers (page 1 of 2).
Contractivity and linear convergence in bilinear saddle-point problems: An operator-theoretic approach
Colin Dirren, Mattia Bianchi, Panagiotis D. Grontas, John Lygeros, Florian Dörfler (18 Oct 2024)

Faster Linear Systems and Matrix Norm Approximation via Multi-level Sketched Preconditioning
Michał Dereziński, Christopher Musco, Jiaming Yang (09 May 2024)

Multiple Locally Linear Kernel Machines
David Picard (17 Jan 2024)

Variance-Reduced Conservative Policy Iteration
Naman Agarwal, Brian Bullins, Karan Singh (12 Dec 2022)

RECAPP: Crafting a More Efficient Catalyst for Convex Optimization
Y. Carmon, A. Jambulapati, Yujia Jin, Aaron Sidford (17 Jun 2022)

A Stochastic Bundle Method for Interpolating Networks
Alasdair Paren, Leonard Berrada, Rudra P. K. Poudel, M. P. Kumar (29 Jan 2022)

On the Complexity of a Practical Primal-Dual Coordinate Method
Ahmet Alacaoglu, V. Cevher, Stephen J. Wright (19 Jan 2022)

Adaptive Client Sampling in Federated Learning via Online Learning with Bandit Feedback
Boxin Zhao, Lingxiao Wang, Mladen Kolar, Ziqi Liu, Qing Cui, Jun Zhou, Chaochao Chen (28 Dec 2021)

Structured Convolutional Kernel Networks for Airline Crew Scheduling
Yassine Yaakoubi, F. Soumis, Simon Lacoste-Julien (25 May 2021)

First-Order Methods for Convex Optimization
Pavel Dvurechensky, Mathias Staudigl, Shimrit Shtern (04 Jan 2021)

Optimal Client Sampling for Federated Learning
Wenlin Chen, Samuel Horváth, Peter Richtárik (26 Oct 2020)

Lower Bounds and Optimal Algorithms for Personalized Federated Learning
Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik (05 Oct 2020)

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely (26 Aug 2020)

Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
Chaobing Song, Yong Jiang, Yi Ma (18 Jun 2020)

Active Subspace of Neural Networks: Structural Analysis and Universal Attacks
Chunfeng Cui, Kaiqi Zhang, Talgat Daulbaev, Julia Gusak, Ivan V. Oseledets, Zheng-Wei Zhang (29 Oct 2019)
Why gradient clipping accelerates training: A theoretical justification for adaptivity
Jingzhao Zhang, Tianxing He, S. Sra, Ali Jadbabaie (28 May 2019)
Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model
Tong Wang, Qihang Lin (10 May 2019)

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh (15 Feb 2019)

Asynchronous Accelerated Proximal Stochastic Gradient for Strongly Convex Distributed Finite Sums
Hadrien Hendrikx, Francis R. Bach, Laurent Massoulié (28 Jan 2019)

On the Ineffectiveness of Variance Reduced Optimization for Deep Learning
Aaron Defazio, Léon Bottou (11 Dec 2018)

Online Adaptive Methods, Universality and Acceleration
Kfir Y. Levy, A. Yurtsever, V. Cevher (08 Sep 2018)

SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang (04 Jul 2018)

A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates
Kaiwen Zhou, Fanhua Shang, James Cheng (28 Jun 2018)

On the insufficiency of existing momentum schemes for Stochastic Optimization
Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, Sham Kakade (15 Mar 2018)

Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization
Zeyuan Allen-Zhu (12 Feb 2018)

Adaptive Stochastic Dual Coordinate Ascent for Conditional Random Fields
Rémi Le Priol, Alexandre Piché, Simon Lacoste-Julien (22 Dec 2017)

Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent
Chi Jin, Praneeth Netrapalli, Michael I. Jordan (28 Nov 2017)

Leverage Score Sampling for Faster Accelerated Regression and ERM
Naman Agarwal, Sham Kakade, Rahul Kidambi, Y. Lee, Praneeth Netrapalli, Aaron Sidford (22 Nov 2017)

An inexact subsampled proximal Newton-type method for large-scale machine learning
Xuanqing Liu, Cho-Jui Hsieh, J. Lee, Yuekai Sun (28 Aug 2017)

Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning
Frank E. Curtis, K. Scheinberg (30 Jun 2017)

A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
Bin Hu, Peter M. Seiler, Anders Rantzer (25 Jun 2017)

Analysis and Optimization of Loss Functions for Multiclass, Top-k, and Multilabel Classification
Maksim Lapin, Matthias Hein, Bernt Schiele (12 Dec 2016)

Asynchronous Stochastic Block Coordinate Descent with Variance Reduction
Bin Gu, Zhouyuan Huo, Heng-Chiao Huang (29 Oct 2016)

An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
Hongzhou Lin, Julien Mairal, Zaïd Harchaoui (04 Oct 2016)

Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method
Lihua Lei, Michael I. Jordan (12 Sep 2016)

On Faster Convergence of Cyclic Block Coordinate Descent-type Methods for Strongly Convex Minimization
Xingguo Li, T. Zhao, R. Arora, Han Liu, Mingyi Hong (10 Jul 2016)

Dimension-Free Iteration Complexity of Finite Sum Optimization Problems
Yossi Arjevani, Ohad Shamir (30 Jun 2016)

Variance-Reduced Proximal Stochastic Gradient Descent for Non-convex Composite Optimization
Xiyu Yu, Dacheng Tao (02 Jun 2016)

Tight Complexity Bounds for Optimizing Composite Objectives
Blake E. Woodworth, Nathan Srebro (25 May 2016)

Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy
Aryan Mokhtari, Alejandro Ribeiro (24 May 2016)

A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization
Shun Zheng, Jialei Wang, Fen Xia, Wenyuan Xu, Tong Zhang (13 Apr 2016)

Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu (18 Mar 2016)

Variance Reduction for Faster Non-Convex Optimization
Zeyuan Allen-Zhu, Elad Hazan (17 Mar 2016)

On the Influence of Momentum Acceleration on Online Learning
Kun Yuan, Bicheng Ying, Ali H. Sayed (14 Mar 2016)

A Simple Practical Accelerated Method for Finite Sums
Aaron Defazio (08 Feb 2016)

SCOPE: Scalable Composite Optimization for Learning on Spark
Shen-Yi Zhao, Ru Xiang, Yinghuan Shi, Peng Gao, Wu-Jun Li (30 Jan 2016)

Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling
Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, Yang Yuan (30 Dec 2015)

L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework
Virginia Smith, Simone Forte, Michael I. Jordan, Martin Jaggi (13 Dec 2015)

Kalman-based Stochastic Gradient Method with Stop Condition and Insensitivity to Conditioning
V. Patel (03 Dec 2015)

Stochastic modified equations and adaptive stochastic gradient algorithms
Qianxiao Li, Cheng Tai, E. Weinan (19 Nov 2015)