Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling
30 December 2015
Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, Yang Yuan
arXiv:1512.09103
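
As context for the list below, here is a minimal, illustrative sketch of coordinate descent with non-uniform coordinate sampling in Python. The quadratic objective, the sampling probabilities proportional to the coordinate-wise Lipschitz constants L_i = A[i, i], and the plain (non-accelerated) update are assumptions made for illustration; this is not the accelerated method of the paper itself.

import numpy as np

# Illustrative only: coordinate descent on f(x) = 0.5*x^T A x - b^T x,
# sampling coordinate i with probability proportional to its coordinate-wise
# Lipschitz constant L_i = A[i, i] (an assumed, common non-uniform scheme).
rng = np.random.default_rng(0)

n = 50
M = rng.standard_normal((n, n))
A = M.T @ M + np.eye(n)        # symmetric positive definite matrix
b = rng.standard_normal(n)

L = np.diag(A).copy()          # coordinate-wise smoothness constants
p = L / L.sum()                # non-uniform sampling distribution

x = np.zeros(n)
for _ in range(20000):
    i = rng.choice(n, p=p)     # draw a coordinate non-uniformly
    grad_i = A[i] @ x - b[i]   # partial derivative along coordinate i
    x[i] -= grad_i / L[i]      # coordinate step with step size 1/L_i

x_star = np.linalg.solve(A, b)
print("distance to optimum:", np.linalg.norm(x - x_star))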

Papers citing "Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling"

28 papers shown, newest first.

Randomized Pairwise Learning with Adaptive Sampling: A PAC-Bayes Analysis
Sijia Zhou, Yunwen Lei, Ata Kabán
03 Apr 2025

A stochastic gradient descent algorithm with random search directions
Eméric Gbaguidi
25 Mar 2025 (ODL)

An Optimal Algorithm for Strongly Convex Min-min Optimization
Alexander Gasnikov, D. Kovalev, Grigory Malinovsky
29 Dec 2022

Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models
Ethan Pickering, T. Sapsis
27 Aug 2022

L-SVRG and L-Katyusha with Adaptive Sampling
Boxin Zhao, Boxiang Lyu, Mladen Kolar
31 Jan 2022

A dual approach for federated learning
Zhenan Fan, Huang Fang, M. Friedlander
26 Jan 2022 (FedML)

Adaptive Client Sampling in Federated Learning via Online Learning with Bandit Feedback
Boxin Zhao, Lingxiao Wang, Mladen Kolar, Ziqi Liu, Qing Cui, Jun Zhou, Chaochao Chen
28 Dec 2021 (FedML)

Cyclic Coordinate Dual Averaging with Extrapolation
Chaobing Song, Jelena Diakonikolas
26 Feb 2021

Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques
Filip Hanzely, Boxin Zhao, Mladen Kolar
19 Feb 2021 (FedML)

First-Order Methods for Convex Optimization
Pavel Dvurechensky, Mathias Staudigl, Shimrit Shtern
04 Jan 2021 (ODL)

Optimal Client Sampling for Federated Learning
Wenlin Chen, Samuel Horváth, Peter Richtárik
26 Oct 2020 (FedML)

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely
26 Aug 2020

Adaptive Task Sampling for Meta-Learning
Chenghao Liu, Zhihao Wang, Doyen Sahoo, Yuan Fang, Anton van den Hengel, S. Hoi
17 Jul 2020

Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik
26 Feb 2020 (FedML, AI4CE)

Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely, D. Kovalev, Peter Richtárik
11 Feb 2020

Convergence Analysis of Block Coordinate Algorithms with Determinantal Sampling
Mojmír Mutný, Michal Derezinski, Andreas Krause
25 Oct 2019

The Practicality of Stochastic Optimization in Imaging Inverse Problems
Junqi Tang, K. Egiazarian, Mohammad Golbabaee, Mike Davies
22 Oct 2019

Accelerated Decentralized Optimization with Local Updates for Smooth and Strongly Convex Objectives
Hadrien Hendrikx, Francis R. Bach, Laurent Massoulié
05 Oct 2018

SEGA: Variance Reduction via Gradient Sketching
Filip Hanzely, Konstantin Mishchenko, Peter Richtárik
09 Sep 2018

Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
Nicolas Loizou, Peter Richtárik
27 Dec 2017

Leverage Score Sampling for Faster Accelerated Regression and ERM
Naman Agarwal, Sham Kakade, Rahul Kidambi, Y. Lee, Praneeth Netrapalli, Aaron Sidford
22 Nov 2017

Safe Adaptive Importance Sampling
Sebastian U. Stich, Anant Raj, Martin Jaggi
07 Nov 2017

Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
A. Chambolle, Matthias Joachim Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb
15 Jun 2017

Faster Coordinate Descent via Adaptive Importance Sampling
Dmytro Perekrestenko, V. Cevher, Martin Jaggi
07 Mar 2017

Faster Principal Component Regression and Stable Matrix Chebyshev Approximation
Zeyuan Allen-Zhu, Yuanzhi Li
16 Aug 2016

Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu
18 Mar 2016 (ODL)

Variance Reduction for Faster Non-Convex Optimization
Zeyuan Allen-Zhu, Elad Hazan
17 Mar 2016 (ODL)

A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
Weijie Su, Stephen P. Boyd, Emmanuel J. Candes
04 Mar 2015