Exact and Inexact Subsampled Newton Methods for Optimization
arXiv:1609.08502, 27 September 2016
Raghu Bollapragada, R. Byrd, J. Nocedal
Papers citing "Exact and Inexact Subsampled Newton Methods for Optimization" (22 papers)
A stochastic gradient method for trilevel optimization. Tommaso Giovannelli, G. Kent, Luis Nunes Vicente. 11 May 2025.
Efficient Curvature-Aware Hypergradient Approximation for Bilevel Optimization. Youran Dong, Junfeng Yang, Wei-Ting Yao, Jin Zhang. 04 May 2025.
SAPPHIRE: Preconditioned Stochastic Variance Reduction for Faster Large-Scale Statistical Learning. Jingruo Sun, Zachary Frangella, Madeleine Udell. 28 Jan 2025.
An Efficient Nonlinear Acceleration method that Exploits Symmetry of the Hessian. Huan He, Shifan Zhao, Z. Tang, Joyce C. Ho, Y. Saad, Yuanzhe Xi. 22 Oct 2022.
SP2: A Second Order Stochastic Polyak Method. Shuang Li, W. Swartworth, Martin Takáč, Deanna Needell, Robert Mansel Gower. 17 Jul 2022.
Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches. Michal Derezinski. 06 Jun 2022.
Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence. Sen Na, Michal Derezinski, Michael W. Mahoney. 20 Apr 2022.
Large-Scale Deep Learning Optimizations: A Comprehensive Survey. Xiaoxin He, Fuzhao Xue, Xiaozhe Ren, Yang You. 01 Nov 2021.
slimTrain -- A Stochastic Approximation Method for Training Separable Deep Neural Networks. Elizabeth Newman, Julianne Chung, Matthias Chung, Lars Ruthotto. 28 Sep 2021.
Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization. Raghu Bollapragada, Stefan M. Wild. 24 Sep 2021.
SVRG Meets AdaGrad: Painless Variance Reduction. Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark W. Schmidt, Simon Lacoste-Julien. 18 Feb 2021.
Adaptive and Oblivious Randomized Subspace Methods for High-Dimensional Optimization: Sharp Analysis and Lower Bounds. Jonathan Lacotte, Mert Pilanci. 13 Dec 2020.
Learning Rates as a Function of Batch Size: A Random Matrix Theory Approach to Neural Network Training. Diego Granziol, S. Zohren, Stephen J. Roberts. 16 Jun 2020.
SONIA: A Symmetric Blockwise Truncated Optimization Algorithm. Majid Jahani, M. Nazari, R. Tappenden, A. Berahas, Martin Takáč. 06 Jun 2020.
ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning. Z. Yao, A. Gholami, Sheng Shen, Mustafa Mustafa, Kurt Keutzer, Michael W. Mahoney. 01 Jun 2020.
Adversarial Classification via Distributional Robustness with Wasserstein Ambiguity. Nam Ho-Nguyen, Stephen J. Wright. 28 May 2020.
Low Rank Saddle Free Newton: A Scalable Method for Stochastic Nonconvex Optimization. Thomas O'Leary-Roseberry, Nick Alger, Omar Ghattas. 07 Feb 2020.
High-Dimensional Optimization in Adaptive Random Subspaces. Jonathan Lacotte, Mert Pilanci, Marco Pavone. 27 Jun 2019.
Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample. A. Berahas, Majid Jahani, Peter Richtárik, Martin Takáč. 28 Jan 2019.
GPU Accelerated Sub-Sampled Newton's Method. Sudhir B. Kylasa, Farbod Roosta-Khorasani, Michael W. Mahoney, A. Grama. 26 Feb 2018.
Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning. Frank E. Curtis, K. Scheinberg. 30 Jun 2017.
Generalized Self-Concordant Functions: A Recipe for Newton-Type Methods. Tianxiao Sun, Quoc Tran-Dinh. 14 Mar 2017.