Reducing the variance in online optimization by transporting past gradients
Sébastien M. R. Arnold, Pierre-Antoine Manzagol, Reza Babanezhad, Ioannis Mitliagkas, Nicolas Le Roux
arXiv:1906.03532, 8 June 2019
Papers citing "Reducing the variance in online optimization by transporting past gradients" (6 of 6 papers shown)
ErrorCompensatedX: error compensation for variance reduced algorithms
Hanlin Tang, Yao Li, Ji Liu, Ming Yan
24 · 9 · 0 · 04 Aug 2021

Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering
Ricky T. Q. Chen, Dami Choi, Lukas Balles, David Duvenaud, Philipp Hennig
ODL · 36 · 6 · 0 · 09 Nov 2020

Demon: Improved Neural Network Training with Momentum Decay
John Chen, Cameron R. Wolfe, Zhaoqi Li, Anastasios Kyrillidis
ODL · 24 · 15 · 0 · 11 Oct 2019

Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
Nicolas Loizou, Peter Richtárik
17 · 199 · 0 · 27 Dec 2017

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 332 · 11,684 · 0 · 09 Mar 2017

A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach
126 · 259 · 0 · 10 Dec 2012