Convergence rates of sub-sampled Newton methods
Murat A. Erdogdu, Andrea Montanari (12 August 2015)
arXiv:1508.02810
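For context, the paper's titular method replaces the exact Hessian in Newton's method with an estimate built from a random sub-sample of the data, while keeping the full gradient. Below is a minimal sketch of one such update for a finite-sum objective; the function and variable names (subsampled_newton_step, hess_i, and so on) are illustrative assumptions, not taken from the paper.

import numpy as np

def subsampled_newton_step(theta, grad_full, hess_i, n, sample_size, rng):
    """One illustrative sub-sampled Newton update.

    theta       : current iterate, shape (d,)
    grad_full   : callable returning the full gradient at theta, shape (d,)
    hess_i      : callable hess_i(i, theta) -> Hessian of the i-th loss, shape (d, d)
    n           : number of training examples
    sample_size : |S|, size of the Hessian sub-sample
    rng         : numpy random Generator
    """
    # Estimate the Hessian from a uniform random subset S:
    # H_S = (1/|S|) * sum_{i in S} Hessian of f_i at theta.
    S = rng.choice(n, size=sample_size, replace=False)
    H_S = sum(hess_i(i, theta) for i in S) / sample_size
    # Newton direction computed from the sub-sampled Hessian
    # but the exact (full) gradient.
    direction = np.linalg.solve(H_S, grad_full(theta))
    return theta - direction

Because only the Hessian is sub-sampled, each step costs roughly a gradient evaluation plus |S| per-example Hessians and one d-by-d solve, which is the trade-off the paper's convergence rates quantify.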

Papers citing "Convergence rates of sub-sampled Newton methods" (29 papers shown)

SAPPHIRE: Preconditioned Stochastic Variance Reduction for Faster Large-Scale Statistical Learning
Jingruo Sun, Zachary Frangella, Madeleine Udell (28 Jan 2025)

Second-order Information Promotes Mini-Batch Robustness in Variance-Reduced Gradients
Sachin Garg, A. Berahas, Michał Dereziński (23 Apr 2024)

Eva: A General Vectorized Approximation Framework for Second-order Optimization
Lin Zhang, S. Shi, Bo-wen Li (04 Aug 2023)

ISAAC Newton: Input-based Approximate Curvature for Newton's Method
Felix Petersen, Tobias Sutter, Christian Borgelt, Dongsung Huh, Hilde Kuehne, Yuekai Sun, Oliver Deussen (01 May 2023) [ODL]

SP2: A Second Order Stochastic Polyak Method
Shuang Li, W. Swartworth, Martin Takáč, Deanna Needell, Robert Mansel Gower (17 Jul 2022)

Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches
Michał Dereziński (06 Jun 2022)

Augmented Newton Method for Optimization: Global Linear Rate and Momentum Interpretation
M. Morshed (23 May 2022) [ODL]

Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence
Sen Na, Michał Dereziński, Michael W. Mahoney (20 Apr 2022)

SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing
Nicolò Dal Fabbro, S. Dey, M. Rossi, Luca Schenato (11 Feb 2022) [FedML]

SCORE: Approximating Curvature Information under Self-Concordant Regularization
Adeyemi Damilare Adeoye, Alberto Bemporad (14 Dec 2021)

Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update
Michał Dereziński, Jonathan Lacotte, Mert Pilanci, Michael W. Mahoney (15 Jul 2021)

Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms
A. Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gurbuzbalaban, Umut Şimşekli, Lingjiong Zhu (09 Jun 2021)

Adaptive and Oblivious Randomized Subspace Methods for High-Dimensional Optimization: Sharp Analysis and Lower Bounds
Jonathan Lacotte, Mert Pilanci (13 Dec 2020)

Recursive Importance Sketching for Rank Constrained Least Squares: Algorithms and High-order Convergence
Yuetian Luo, Wen Huang, Xudong Li, Anru R. Zhang (17 Nov 2020)

Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks
Umut Şimşekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu (16 Jun 2020)

Scalable Second Order Optimization for Deep Learning
Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, Y. Singer (20 Feb 2020) [ODL]

High-Dimensional Optimization in Adaptive Random Subspaces
Jonathan Lacotte, Mert Pilanci, Marco Pavone (27 Jun 2019)

GPU Accelerated Sub-Sampled Newton's Method
Sudhir B. Kylasa, Farbod Roosta-Khorasani, Michael W. Mahoney, A. Grama (26 Feb 2018) [ODL]

An inexact subsampled proximal Newton-type method for large-scale machine learning
Xuanqing Liu, Cho-Jui Hsieh, J. Lee, Yuekai Sun (28 Aug 2017)

Efficient Regret Minimization in Non-Convex Games
Elad Hazan, Karan Singh, Cyril Zhang (31 Jul 2017)

Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning
Frank E. Curtis, K. Scheinberg (30 Jun 2017)

Sub-sampled Cubic Regularization for Non-convex Optimization
Jonas Köhler, Aurelien Lucchi (16 May 2017)

Diving into the shallows: a computational perspective on large-scale shallow learning
Siyuan Ma, M. Belkin (30 Mar 2017)

Generalized Self-Concordant Functions: A Recipe for Newton-Type Methods
Tianxiao Sun, Quoc Tran-Dinh (14 Mar 2017)

An empirical analysis of the optimization of deep network loss surfaces
Daniel Jiwoong Im, Michael Tao, K. Branson (13 Dec 2016) [ODL]

Exact and Inexact Subsampled Newton Methods for Optimization
Raghu Bollapragada, R. Byrd, J. Nocedal (27 Sep 2016)

Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy
Aryan Mokhtari, Alejandro Ribeiro (24 May 2016) [ODL]

Sub-Sampled Newton Methods II: Local Convergence Rates
Farbod Roosta-Khorasani, Michael W. Mahoney (18 Jan 2016)

Newton-Stein Method: An optimization method for GLMs via Stein's Lemma
Murat A. Erdogdu (28 Nov 2015)