SAGA with Arbitrary Sampling (arXiv:1901.08669)

24 January 2019
Xun Qian, Zheng Qu, Peter Richtárik
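For context, SAGA (Defazio, Bach, and Lacoste-Julien, 2014) maintains a table of the most recently evaluated component gradients and combines it with a fresh gradient to form a variance-reduced, unbiased estimator; the paper above analyzes this method when components are drawn by an arbitrary sampling rule. Below is a minimal single-index Python sketch using the standard importance-weighted estimator. The function names, the serial one-index sampling, and the fixed step size are illustrative assumptions, not the paper's code, which also covers minibatch samplings.

```python
import numpy as np

def saga_arbitrary_sampling(grad_i, x0, n, step, iters, probs=None, seed=0):
    """Sketch of SAGA with non-uniform serial sampling (illustrative only).

    grad_i(i, x): gradient of the i-th component f_i at x (user-supplied).
    probs: sampling probabilities p_i; uniform if None. The 1/(n*p_i)
    factor keeps the gradient estimator unbiased for any p > 0.
    """
    rng = np.random.default_rng(seed)
    p = np.full(n, 1.0 / n) if probs is None else np.asarray(probs, dtype=float)
    x = np.array(x0, dtype=float)
    table = np.stack([grad_i(i, x) for i in range(n)])  # stored gradients alpha_i
    avg = table.mean(axis=0)                            # (1/n) * sum_i alpha_i
    for _ in range(iters):
        i = rng.choice(n, p=p)
        g = grad_i(i, x)
        # Unbiased estimator: (g - alpha_i) / (n p_i) + average of the table.
        x -= step * ((g - table[i]) / (n * p[i]) + avg)
        avg += (g - table[i]) / n                       # keep the running average in sync
        table[i] = g
    return x
```

With uniform probabilities this reduces to standard SAGA; non-uniform choices, for instance probabilities proportional to per-component smoothness constants, are the kind of importance sampling the arbitrary-sampling framework is designed to analyze.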

Papers citing "SAGA with Arbitrary Sampling"

12 papers

  • Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features. Aleksandr Beznosikov, David Dobre, Gauthier Gidel (23 Apr 2023)
  • Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions. S. Choudhury, Eduard A. Gorbunov, Nicolas Loizou (27 Feb 2023)
  • SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities. Aleksandr Beznosikov, Alexander Gasnikov (12 Oct 2022)
  • Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods. Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou (15 Feb 2022)
  • L-SVRG and L-Katyusha with Adaptive Sampling. Boxin Zhao, Boxiang Lyu, Mladen Kolar (31 Jan 2022)
  • Random-reshuffled SARAH does not need a full gradient computations. Aleksandr Beznosikov, Martin Takáč (26 Nov 2021)
  • Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely (26 Aug 2020)
  • Sampling and Update Frequencies in Proximal Variance-Reduced Stochastic Gradient Methods. Martin Morin, Pontus Giselsson (13 Feb 2020)
  • Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems. Filip Hanzely, D. Kovalev, Peter Richtárik (11 Feb 2020)
  • Cocoercivity, Smoothness and Bias in Variance-Reduced Stochastic Gradient Methods. Martin Morin, Pontus Giselsson (21 Mar 2019)
  • A Proximal Stochastic Gradient Method with Progressive Variance Reduction. Lin Xiao, Tong Zhang (19 Mar 2014)
  • Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning. Julien Mairal (18 Feb 2014)