Coordinate Descent with Arbitrary Sampling I: Algorithms and Complexity

27 December 2014
Zheng Qu, Peter Richtárik

Papers citing "Coordinate Descent with Arbitrary Sampling I: Algorithms and Complexity"

18 / 18 papers shown

Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions
S. Choudhury, Eduard A. Gorbunov, Nicolas Loizou
27 Feb 2023

GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity
A. Maranjyan, M. Safaryan, Peter Richtárik
28 Oct 2022

Stochastic Extragradient: General Analysis and Improved Rates
Eduard A. Gorbunov, Hugo Berard, Gauthier Gidel, Nicolas Loizou
16 Nov 2021

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely
26 Aug 2020

Sampling and Update Frequencies in Proximal Variance-Reduced Stochastic Gradient Methods
Martin Morin, Pontus Giselsson
13 Feb 2020

99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it
Konstantin Mishchenko, Filip Hanzely, Peter Richtárik
27 Jan 2019

SEGA: Variance Reduction via Gradient Sketching
Filip Hanzely, Konstantin Mishchenko, Peter Richtárik
09 Sep 2018

Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
Nicolas Loizou, Peter Richtárik
27 Dec 2017

Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
A. Chambolle, Matthias Joachim Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb
15 Jun 2017

Faster Coordinate Descent via Adaptive Importance Sampling
Dmytro Perekrestenko, V. Cevher, Martin Jaggi
07 Mar 2017

Federated Learning: Strategies for Improving Communication Efficiency
Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon
18 Oct 2016 (FedML)

Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling
Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, Yang Yuan
30 Dec 2015

Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting
Jakub Konecný, Jie Liu, Peter Richtárik, Martin Takáč
16 Apr 2015 (ODL)

Stochastic Dual Coordinate Ascent with Adaptive Probabilities
Dominik Csiba, Zheng Qu, Peter Richtárik
27 Feb 2015 (ODL)

Adding vs. Averaging in Distributed Primal-Dual Optimization
Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtárik, Martin Takáč
12 Feb 2015 (FedML)

Coordinate Descent with Arbitrary Sampling II: Expected Separable Overapproximation
Zheng Qu, Peter Richtárik
27 Dec 2014

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
19 Mar 2014 (ODL)

Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Julien Mairal
18 Feb 2014