ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1310.5715 · Cited By
Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm

21 October 2013
Deanna Needell, Nathan Srebro, Rachel A. Ward

Papers citing "Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm"

50 of 67 citing papers shown:
Better Rates for Random Task Orderings in Continual Linear Models
Itay Evron, Ran Levinstein, Matan Schliserman, Uri Sherman, Tomer Koren, Daniel Soudry, Nathan Srebro
CLL
06 Apr 2025

Geometric Median Matching for Robust k-Subset Selection from Noisy Data
Anish Acharya, Sujay Sanghavi, Alexandros G. Dimakis, Inderjit S. Dhillon
AAML
01 Apr 2025

A stochastic gradient descent algorithm with random search directions
Eméric Gbaguidi
ODL
25 Mar 2025

Multiple Importance Sampling for Stochastic Gradient Estimation
Corentin Salaün, Xingchang Huang, Iliyan Georgiev, Niloy J. Mitra, Gurprit Singh
22 Jul 2024

Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance
Dimitris Oikonomou, Nicolas Loizou
06 Jun 2024

On Adaptive Stochastic Optimization for Streaming Data: A Newton's Method with O(dN) Operations
Antoine Godichon-Baggioni, Nicklas Werge
ODL
29 Nov 2023

Computing Approximate $\ell_p$ Sensitivities
Swati Padmanabhan, David P. Woodruff, Qiuyi Zhang
07 Nov 2023

Demystifying the Myths and Legends of Nonconvex Convergence of SGD
Aritra Dutta, El Houcine Bergou, Soumia Boucherouite, Nicklas Werge, M. Kandemir, Xin Li
19 Oct 2023

Weighted Averaged Stochastic Gradient Descent: Asymptotic Normality and Optimality
Ziyang Wei, Wanrong Zhu, W. Wu
13 Jul 2023

Provable convergence guarantees for black-box variational inference
Justin Domke, Guillaume Garrigos, Robert Mansel Gower
04 Jun 2023

Stochastic Steffensen method
Minda Zhao, Zehua Lai, Lek-Heng Lim
ODL
28 Nov 2022

Private optimization in the interpolation regime: faster rates and hardness results
Hilal Asi, Karan N. Chadha, Gary Cheng, John C. Duchi
31 Oct 2022

Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models
Ethan Pickering, T. Sapsis
27 Aug 2022

ALS: Augmented Lagrangian Sketching Methods for Linear Systems
M. Morshed
12 Aug 2022

On the fast convergence of minibatch heavy ball momentum
Raghu Bollapragada, Tyler Chen, Rachel A. Ward
15 Jun 2022

High-dimensional limit theorems for SGD: Effective dynamics and critical scaling
Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath
08 Jun 2022

GraB: Finding Provably Better Data Permutations than Random Reshuffling
Yucheng Lu, Wentao Guo, Christopher De Sa
FedML
22 May 2022

Tricks and Plugins to GBM on Images and Sequences
Biyi Fang, J. Utke, Diego Klabjan
01 Mar 2022

Adaptive Client Sampling in Federated Learning via Online Learning with Bandit Feedback
Boxin Zhao, Lingxiao Wang, Mladen Kolar, Ziqi Liu, Zhiqiang Zhang, Jun Zhou, Chaochao Chen
FedML
28 Dec 2021

Adaptive Importance Sampling meets Mirror Descent: a Bias-variance tradeoff
Anna Korba, François Portier
29 Oct 2021

Stochastic gradient descent with noise of machine learning type. Part I: Discrete time analysis
Stephan Wojtowytsch
04 May 2021

Reweighting Augmented Samples by Minimizing the Maximal Expected Loss
Mingyang Yi, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma
16 Mar 2021

On Riemannian Stochastic Approximation Schemes with Fixed Step-Size
Alain Durmus, P. Jiménez, Eric Moulines, Salem Said
15 Feb 2021

Distributed Second Order Methods with Fast Rates and Compressed Communication
Rustem Islamov, Xun Qian, Peter Richtárik
14 Feb 2021

Federated Learning under Importance Sampling
Elsa Rizk, Stefan Vlaski, A. H. Sayed
FedML
14 Dec 2020

Optimal Importance Sampling for Federated Learning
Elsa Rizk, Stefan Vlaski, A. H. Sayed
FedML
26 Oct 2020

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely
26 Aug 2020

Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks
Jonathan Ashbrock, A. Powell
MQ
25 Aug 2020

On stochastic mirror descent with interacting particles: convergence properties and variance reduction
Anastasia Borovykh, N. Kantas, P. Parpas, G. Pavliotis
15 Jul 2020

AdaScale SGD: A User-Friendly Algorithm for Distributed Training
Tyler B. Johnson, Pulkit Agrawal, Haijie Gu, Carlos Guestrin
ODL
09 Jul 2020

Federated Learning with Compression: Unified Analysis and Sharp Guarantees
Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi
FedML
02 Jul 2020

SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation
Robert Mansel Gower, Othmane Sebbouh, Nicolas Loizou
18 Jun 2020

An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias
Lu Yu, Krishnakumar Balasubramanian, S. Volgushev, Murat A. Erdogdu
14 Jun 2020

Random Reshuffling: Simple Analysis with Vast Improvements
Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik
10 Jun 2020

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich
FedML
23 Mar 2020

Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
Nicolas Loizou, Sharan Vaswani, I. Laradji, Simon Lacoste-Julien
24 Feb 2020

Sampling and Update Frequencies in Proximal Variance-Reduced Stochastic Gradient Methods
Martin Morin, Pontus Giselsson
13 Feb 2020

Gradient tracking and variance reduction for decentralized optimization and machine learning
Ran Xin, S. Kar, U. Khan
13 Feb 2020

Better Theory for SGD in the Nonconvex World
Ahmed Khaled, Peter Richtárik
09 Feb 2020

Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval
Yan Shuo Tan, Roman Vershynin
28 Oct 2019

Unified Optimal Analysis of the (Stochastic) Gradient Method
Sebastian U. Stich
09 Jul 2019

Stochastic Gradients for Large-Scale Tensor Decomposition
T. Kolda, David Hong
04 Jun 2019

The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares
Rong Ge, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli
29 Apr 2019

SGD Converges to Global Minimum in Deep Learning via Star-convex Path
Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, Vahid Tarokh
02 Jan 2019

On the Generalization of Stochastic Gradient Descent with Momentum
Ali Ramezani-Kebrya, Kimon Antonakopoulos, V. Cevher, Ashish Khisti, Ben Liang
MLT
12 Sep 2018

Mitigating Sybils in Federated Learning Poisoning
Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh
AAML
14 Aug 2018

A Hybrid Recommender System for Patient-Doctor Matchmaking in Primary Care
Qiwei Han, Mengxin Ji, Inigo Martinez de Rituerto de Troya, Manas Gaur, Leid Zejnilovic
09 Aug 2018

Stochastic modified equations for the asynchronous stochastic gradient descent
Jing An, Jian-wei Lu, Lexing Ying
21 May 2018

Generalization Error Bounds for Noisy, Iterative Algorithms
Ankit Pensia, Varun Jog, Po-Ling Loh
12 Jan 2018

Improved asynchronous parallel optimization analysis for stochastic incremental methods
Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien
11 Jan 2018