Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization

25 January 2019
Zhenxun Zhuang, Ashok Cutkosky, Francesco Orabona

Papers citing "Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization"

4 / 4 papers shown
Title | Authors | Tags | Citations | Date
On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes | Xiaoyun Li, Francesco Orabona | | 297 | 21 May 2018
Online Learning Rate Adaptation with Hypergradient Descent | A. G. Baydin, R. Cornish, David Martínez-Rubio, Mark Schmidt, Frank Wood | ODL | 250 | 14 Mar 2017
Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming | Saeed Ghadimi, Guanghui Lan | ODL | 1,555 | 22 Sep 2013
ADADELTA: An Adaptive Learning Rate Method | Matthew D. Zeiler | ODL | 6,630 | 22 Dec 2012