A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases
22 September 2022
James Harrison, Luke Metz, Jascha Narain Sohl-Dickstein

Papers citing "A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases" (22 / 22 papers shown)

Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies
Paul Vicol, Luke Metz, Jascha Narain Sohl-Dickstein
27 Dec 2021

Accelerating Quadratic Optimization with Reinforcement Learning [OffRL]
Jeffrey Ichnowski, Paras Jain, Bartolomeo Stellato, G. Banjac, Michael Luo, Francesco Borrelli, Joseph E. Gonzalez, Ion Stoica, Ken Goldberg
22 Jul 2021

A Generalizable Approach to Learning Optimizers [AI4CE]
Diogo Almeida, Clemens Winter, Jie Tang, Wojciech Zaremba
02 Jun 2021

Reverse engineering learned optimizers reveals known and novel mechanisms
Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Narain Sohl-Dickstein
04 Nov 2020

Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel
Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, Surya Ganguli
28 Oct 2020

Training Stronger Baselines for Learning to Optimize [OffRL]
Tianlong Chen, Weiyi Zhang, Jingyang Zhou, Shiyu Chang, Sijia Liu, Lisa Amini, Zhangyang Wang
18 Oct 2020

Momentum via Primal Averaging: Theoretical Insights and Learning Rate Schedules for Non-Convex Optimization
Aaron Defazio
01 Oct 2020

Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves
Luke Metz, Niru Maheswaranathan, C. Freeman, Ben Poole, Jascha Narain Sohl-Dickstein
23 Sep 2020

Meta-Learning in Neural Networks: A Survey [OOD]
Timothy M. Hospedales, Antreas Antoniou, P. Micaelli, Amos Storkey
11 Apr 2020

Using a thousand optimization tasks to learn hyperparameter search strategies
Luke Metz, Niru Maheswaranathan, Ruoxi Sun, C. Freeman, Ben Poole, Jascha Narain Sohl-Dickstein
27 Feb 2020

Continuous Meta-Learning without Tasks [CLL, OOD]
James Harrison, Apoorva Sharma, Chelsea Finn, Marco Pavone
18 Dec 2019

First-Order Preconditioning via Hypergradient Descent [AI4CE]
Theodore H. Moskovitz, Rui Wang, Janice Lan, Sanyam Kapoor, Thomas Miconi, J. Yosinski, Aditya Rawal
18 Oct 2019

Learning an Adaptive Learning Rate Schedule
Zhen Xu, Andrew M. Dai, Jonas Kemp, Luke Metz
20 Sep 2019

Understanding and correcting pathologies in the training of learned optimizers [ODL]
Luke Metz, Niru Maheswaranathan, Jeremy Nixon, C. Freeman, Jascha Narain Sohl-Dickstein
24 Oct 2018

Meta-Learning: A Survey [FedML, OOD]
Joaquin Vanschoren
08 Oct 2018

Meta-learning with differentiable closed-form solvers [ODL]
Luca Bertinetto, João F. Henriques, Philip Torr, Andrea Vedaldi
21 May 2018

Recasting Gradient-Based Meta-Learning as Hierarchical Bayes [BDL]
Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, Thomas Griffiths
26 Jan 2018

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks [OOD]
Chelsea Finn, Pieter Abbeel, Sergey Levine
09 Mar 2017

Learning to reinforcement learn [OffRL]
Jane X. Wang, Z. Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Rémi Munos, Charles Blundell, D. Kumaran, M. Botvinick
17 Nov 2016

Entropy-SGD: Biasing Gradient Descent Into Wide Valleys [ODL]
Pratik Chaudhari, A. Choromańska, Stefano Soatto, Yann LeCun, Carlo Baldassi, C. Borgs, J. Chayes, Levent Sagun, R. Zecchina
06 Nov 2016

Learning to learn by gradient descent by gradient descent
Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
14 Jun 2016

No More Pesky Learning Rates
Tom Schaul, Sixin Zhang, Yann LeCun
06 Jun 2012