ResearchTrend.AI
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
arXiv:1407.0202 · 1 July 2014
Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien
ODL

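For context on what the papers below are citing: SAGA maintains a table of the most recently seen gradient of each component function and uses it to build an unbiased, variance-reduced gradient estimate. The sketch below illustrates that update on a toy one-dimensional least-squares problem; the objective, step size, and variable names are illustrative choices, not taken from the paper.

```python
import random

random.seed(0)
n = 20
b = [random.gauss(0.0, 1.0) for _ in range(n)]  # data points

# Minimize F(x) = (1/n) * sum_i 0.5*(x - b_i)^2; the minimizer is mean(b).
def grad(x, i):
    # gradient of the i-th component 0.5*(x - b_i)^2
    return x - b[i]

x = 0.0
table = [grad(x, i) for i in range(n)]  # stored per-component gradients
avg = sum(table) / n                    # running average of the table
step = 0.3                              # each L_i = 1; roughly the 1/(3L) scale

for _ in range(2000):
    j = random.randrange(n)
    g_new = grad(x, j)
    # SAGA estimator: new gradient - stored gradient + table average (unbiased)
    x -= step * (g_new - table[j] + avg)
    avg += (g_new - table[j]) / n       # keep the average consistent
    table[j] = g_new

mean_b = sum(b) / n
print(abs(x - mean_b))  # SAGA converges to the exact minimizer, so this is ~0
```

Unlike plain SGD, the correction term `-table[j] + avg` drives the estimator's variance to zero at the optimum, which is what allows a constant step size and linear convergence on strongly convex problems.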
Papers citing "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives"

24 / 24 papers shown

  • HOME-3: High-Order Momentum Estimator with Third-Power Gradient for Convex and Smooth Nonconvex Optimization (ODL · 16 May 2025)
    Wei Zhang, Arif Hassan Zidan, Afrar Jahin, Wei Zhang, Tianming Liu
  • Personalized Federated Learning under Model Dissimilarity Constraints (FedML · 12 May 2025)
    Samuel Erickson, Mikael Johansson
  • Optimizing Chain-of-Thought Reasoners via Gradient Variance Minimization in Rejection Sampling and RL (LRM · 05 May 2025)
    Jiarui Yao, Yifan Hao, Hanning Zhang, Hanze Dong, Wei Xiong, Nan Jiang, Tong Zhang
  • Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis (08 Jan 2025)
    Ruichen Luo, Sebastian U Stich, Samuel Horváth, Martin Takáč
  • Efficient Optimization Algorithms for Linear Adversarial Training (AAML · 16 Oct 2024)
    Antônio H. Ribeiro, Thomas B. Schon, Dave Zahariah, Francis Bach
  • Stochastic variance-reduced Gaussian variational inference on the Bures-Wasserstein manifold (DRL · 03 Oct 2024)
    Hoang Phuc Hau Luu, Hanlin Yu, Bernardo Williams, Marcelo Hartmann, Arto Klami
  • Ordered Momentum for Asynchronous SGD (ODL · 27 Jul 2024)
    Chang-Wei Shi, Yi-Rui Yang, Wu-Jun Li
  • Relax and penalize: a new bilevel approach to mixed-binary hyperparameter optimization (21 Aug 2023)
    M. D. Santis, Jordan Frécon, Francesco Rinaldi, Saverio Salzo, Martin Schmidt
  • Estimate-Then-Optimize versus Integrated-Estimation-Optimization versus Sample Average Approximation: A Stochastic Dominance Perspective (13 Apr 2023)
    Adam N. Elmachtoub, Henry Lam, Haofeng Zhang, Yunfan Zhao
  • Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems (08 Jan 2023)
    Quoc Tran-Dinh, Yang Luo
  • Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches (06 Jun 2022)
    Michal Derezinski
  • Adaptive Client Sampling in Federated Learning via Online Learning with Bandit Feedback (FedML · 28 Dec 2021)
    Boxin Zhao, Lingxiao Wang, Mladen Kolar, Ziqi Liu, Qing Cui, Jun Zhou, Chaochao Chen
  • Push-SAGA: A decentralized stochastic algorithm with variance reduction over directed graphs (13 Aug 2020)
    Muhammad I. Qureshi, Ran Xin, S. Kar, U. Khan
  • Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization (18 Jun 2020)
    Chaobing Song, Yong Jiang, Yi-An Ma
  • Parallel Streaming Wasserstein Barycenters (21 May 2017)
    Matthew Staib, Sebastian Claici, Justin Solomon, Stefanie Jegelka
  • SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient (ODL · 01 Mar 2017)
    Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč
  • Riemannian stochastic variance reduced gradient algorithm with retraction and vector transport (18 Feb 2017)
    Hiroyuki Sato, Hiroyuki Kasai, Bamdev Mishra
  • Importance Sampling for Minibatches (06 Feb 2016)
    Dominik Csiba, Peter Richtárik
  • Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems (10 Jul 2014)
    Aaron Defazio, T. Caetano, Justin Domke
  • A Proximal Stochastic Gradient Method with Progressive Variance Reduction (ODL · 19 Mar 2014)
    Lin Xiao, Tong Zhang
  • Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning (18 Feb 2014)
    Julien Mairal
  • Minimizing Finite Sums with the Stochastic Average Gradient (10 Sep 2013)
    Mark Schmidt, Nicolas Le Roux, Francis R. Bach
  • Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization (ODL · 10 Sep 2013)
    Shai Shalev-Shwartz, Tong Zhang
  • Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization (10 Sep 2012)
    Shai Shalev-Shwartz, Tong Zhang