Barzilai-Borwein Step Size for Stochastic Gradient Descent
Conghui Tan, Shiqian Ma, Yuhong Dai, Yuqiu Qian (13 May 2016, arXiv:1605.04131)
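For context, the classical Barzilai-Borwein (BB) rule this paper adapts to the stochastic setting picks the step size alpha_k = (s^T s) / (s^T y), where s is the difference of successive iterates and y the difference of successive gradients. The sketch below is a minimal illustration of that rule inside a plain deterministic gradient loop on a synthetic least-squares problem; it is not the paper's SGD-BB algorithm, and the problem sizes, initial step, and denominator guard are all illustrative assumptions.

    import numpy as np

    # Illustrative only: classical BB1 step size in a deterministic
    # gradient-descent loop on 0.5 * ||A x - b||^2 (synthetic data).
    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 10))
    b = rng.normal(size=100)

    def grad(x):
        # Gradient of the least-squares objective
        return A.T @ (A @ x - b)

    x = np.zeros(10)
    g = grad(x)
    step = 1e-3                       # initial step before BB information exists

    for k in range(50):
        x_new = x - step * g
        g_new = grad(x_new)
        s = x_new - x                 # iterate difference
        y = g_new - g                 # gradient difference
        denom = s @ y
        if abs(denom) > 1e-12:        # guard against a tiny denominator
            step = (s @ s) / denom    # BB1 step size
        x, g = x_new, g_new

    print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)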

Papers citing "Barzilai-Borwein Step Size for Stochastic Gradient Descent"

19 / 19 papers shown
Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance
Dimitris Oikonomou, Nicolas Loizou (06 Jun 2024)
Distributed and Scalable Optimization for Robust Proton Treatment Planning
A. Fu, V. Taasti, M. Zarepisheh (27 Apr 2023)
Stochastic Steffensen method
Minda Zhao, Zehua Lai, Lek-Heng Lim (28 Nov 2022)
Adaptive scaling of the learning rate by second order automatic differentiation
F. Gournay, Alban Gossard (26 Oct 2022)
A Stochastic Variance Reduced Gradient using Barzilai-Borwein Techniques as Second Order Information
Hardik Tankaria, N. Yamashita (23 Aug 2022)
An Adaptive Incremental Gradient Method With Support for Non-Euclidean Norms
Binghui Xie, Chen Jin, Kaiwen Zhou, James Cheng, Wei Meng (28 Apr 2022)
A Stochastic Bundle Method for Interpolating Networks
Alasdair Paren, Leonard Berrada, Rudra P. K. Poudel, M. P. Kumar (29 Jan 2022)
Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization
Kaiwen Zhou, Anthony Man-Cho So, James Cheng (30 Sep 2021)
SVRG Meets AdaGrad: Painless Variance Reduction
Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark W. Schmidt, Simon Lacoste-Julien (18 Feb 2021)
Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems
Zhan Gao, Alec Koppel, Alejandro Ribeiro (02 Jul 2020)
Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
Nicolas Loizou, Sharan Vaswani, I. Laradji, Simon Lacoste-Julien (24 Feb 2020)
Optimization for deep learning: theory and algorithms
Ruoyu Sun (19 Dec 2019)
Fast Stochastic Ordinal Embedding with Variance Reduction and Adaptive Step Size
Ke Ma, Jinshan Zeng, Qianqian Xu, Xiaochun Cao, Wei Liu, Yuan Yao (01 Dec 2019)
Block stochastic gradient descent for large-scale tomographic reconstruction in a parallel network
Yushan Gao, A. Biguri, T. Blumensath (28 Mar 2019)
AdaGrad stepsizes: Sharp convergence over nonconvex landscapes
Rachel A. Ward, Xiaoxia Wu, Léon Bottou (05 Jun 2018)
SPSA-FSR: Simultaneous Perturbation Stochastic Approximation for Feature Selection and Ranking
Zeren D. Yenice, Niranjan Adhikari, Yong Kai Wong, V. Aksakalli, A. T. Gumus, B. Abbasi (16 Apr 2018)
Block-Cyclic Stochastic Coordinate Descent for Deep Neural Networks
Kensuke Nakamura, Stefano Soatto, Byung-Woo Hong (20 Nov 2017)
Big Batch SGD: Automated Inference using Adaptive Batch Sizes
Soham De, A. Yadav, David Jacobs, Tom Goldstein (18 Oct 2016)
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang (19 Mar 2014)