SONIA: A Symmetric Blockwise Truncated Optimization Algorithm
arXiv:2006.03949 · 6 June 2020
Majid Jahani, M. Nazari, R. Tappenden, A. Berahas, Martin Takáč
ODL
Papers citing "SONIA: A Symmetric Blockwise Truncated Optimization Algorithm" (25 of 25 papers shown)
Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1
Majid Jahani, M. Nazari, S. Rusakov, A. Berahas, Martin Takáč
30 May 2019

Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy
Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč
26 Oct 2018

Trust-Region Algorithms for Training Responses: Machine Learning Methods Using Indefinite Hessian Approximations
Jennifer B. Erway, J. Griffin, Roummel F. Marcia, Riadh Omheni
01 Jul 2018

Adaptive Sampling Strategies for Stochastic Optimization
Raghu Bollapragada, R. Byrd, J. Nocedal
30 Oct 2017

Fast and Safe: Accelerated gradient methods with optimality certificates and underestimate sequences
Majid Jahani, N. V. C. Gudapati, Chenxin Ma, R. Tappenden, Martin Takáč
10 Oct 2017

Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study
Peng Xu, Farbod Roosta-Khorasani, Michael W. Mahoney
25 Aug 2017 · ODL

A Robust Multi-Batch L-BFGS Method for Machine Learning
A. Berahas, Martin Takáč
26 Jul 2017 · AAML, ODL

An Investigation of Newton-Sketch and Subsampled Newton Methods
A. Berahas, Raghu Bollapragada, J. Nocedal
17 May 2017

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč
01 Mar 2017 · ODL

Exact and Inexact Subsampled Newton Methods for Optimization
Raghu Bollapragada, R. Byrd, J. Nocedal
27 Sep 2016

Optimization Methods for Large-Scale Machine Learning
Léon Bottou, Frank E. Curtis, J. Nocedal
15 Jun 2016

Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy
Aryan Mokhtari, Alejandro Ribeiro
24 May 2016 · ODL

A Multi-Batch L-BFGS Method for Machine Learning
A. Berahas, J. Nocedal, Martin Takáč
19 May 2016 · ODL

Stochastic Variance Reduction for Nonconvex Optimization
Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola
19 Mar 2016

Stop Wasting My Gradients: Practical SVRG
Reza Babanezhad, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt, Jakub Konečný, Scott Sallinen
05 Nov 2015

adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs
N. Keskar, A. Berahas
04 Nov 2015 · ODL

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
22 Dec 2014 · ODL

Global Convergence of Online Limited Memory BFGS
Aryan Mokhtari, Alejandro Ribeiro
06 Sep 2014

A Stochastic Quasi-Newton Method for Large-Scale Optimization
R. Byrd, Samantha Hansen, J. Nocedal, Y. Singer
27 Jan 2014 · ODL

Minimizing Finite Sums with the Stochastic Average Gradient
Mark Schmidt, Nicolas Le Roux, Francis R. Bach
10 Sep 2013

Inexact Coordinate Descent: Complexity and Preconditioning
R. Tappenden, Peter Richtárik, J. Gondzio
19 Apr 2013

Parallel Coordinate Descent Methods for Big Data Optimization
Peter Richtárik, Martin Takáč
04 Dec 2012

Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function
Peter Richtárik, Martin Takáč
14 Jul 2011

HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright
28 Jun 2011

Hybrid Deterministic-Stochastic Methods for Data Fitting
M. Friedlander, Mark Schmidt
13 Apr 2011