A Robust Multi-Batch L-BFGS Method for Machine Learning

arXiv:1707.08552 · 26 July 2017
A. Berahas, Martin Takáč
Communities: AAML, ODL
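For orientation, most of the citing papers listed below are quasi-Newton methods that extend the same building block as this paper: the L-BFGS two-loop recursion, which applies an implicit inverse-Hessian approximation to the gradient using stored curvature pairs (s_i, y_i). Roughly, the multi-batch method of Berahas and Takáč makes this robust to changing mini-batches by forming gradient differences on the overlap of consecutive batches. The sketch below is the classical two-loop recursion only (Nocedal & Wright, Algorithm 7.4), not the paper's multi-batch variant, and the names lbfgs_direction, s_list, and y_list are chosen here for illustration.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: returns -H_k @ grad, where H_k is the implicit
    L-BFGS inverse-Hessian approximation built from the stored curvature
    pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i, oldest first."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []

    # First loop: walk the curvature pairs from newest to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y

    # Scale by gamma_k = (s^T y) / (y^T y) from the newest pair,
    # the standard choice of initial inverse Hessian H_k^0.
    s, y = s_list[-1], y_list[-1]
    q *= np.dot(s, y) / np.dot(y, y)

    # Second loop: walk the pairs from oldest to newest.
    for s, y, rho, a in zip(s_list, y_list, rhos, reversed(alphas)):
        b = rho * np.dot(y, q)
        q += (a - b) * s

    return -q
```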

Papers citing "A Robust Multi-Batch L-BFGS Method for Machine Learning" (5 of 5 papers shown)

Component-Wise Natural Gradient Descent -- An Efficient Neural Network Optimization
Tran van Sang, Mhd Irvan, R. Yamaguchi, Toshiyuki Nakata
11 Oct 2022

Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization
Raghu Bollapragada, Stefan M. Wild
24 Sep 2021

SONIA: A Symmetric Blockwise Truncated Optimization Algorithm
Majid Jahani, M. Nazari, R. Tappenden, A. Berahas, Martin Takáč
Communities: ODL
06 Jun 2020

Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample
A. Berahas, Majid Jahani, Peter Richtárik, Martin Takáč
28 Jan 2019

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
Communities: ODL
15 Sep 2016