arXiv:1901.09068
Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization
25 January 2019
Zhenxun Zhuang
Ashok Cutkosky
Francesco Orabona
Papers citing "Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization" (4 papers)
On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes
Xiaoyun Li, Francesco Orabona (21 May 2018)

Online Learning Rate Adaptation with Hypergradient Descent
A. G. Baydin, R. Cornish, David Martínez-Rubio, Mark Schmidt, Frank Wood (14 Mar 2017)

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming
Saeed Ghadimi, Guanghui Lan (22 Sep 2013)

ADADELTA: An Adaptive Learning Rate Method
Matthew D. Zeiler (22 Dec 2012)