ResearchTrend.AI

Training Learned Optimizers with Randomly Initialized Learned Optimizers
14 January 2021
Luke Metz, C. Freeman, Niru Maheswaranathan, Jascha Narain Sohl-Dickstein

Papers citing "Training Learned Optimizers with Randomly Initialized Learned Optimizers"

13 / 13 papers shown
Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves
Luke Metz, Niru Maheswaranathan, C. Freeman, Ben Poole, Jascha Narain Sohl-Dickstein
23 Sep 2020

Meta-Gradient Reinforcement Learning with an Objective Discovered Online
Zhongwen Xu, H. V. Hasselt, Matteo Hessel, Junhyuk Oh, Satinder Singh, David Silver
16 Jul 2020

Using learned optimizers to make models robust to input noise
Luke Metz, Niru Maheswaranathan, Jonathon Shlens, Jascha Narain Sohl-Dickstein, E. D. Cubuk
Topics: VLM, OOD
08 Jun 2019

AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence
Jeff Clune
27 May 2019

Understanding and correcting pathologies in the training of learned optimizers
Luke Metz, Niru Maheswaranathan, Jeremy Nixon, C. Freeman, Jascha Narain Sohl-Dickstein
Topics: ODL
24 Oct 2018

Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data
Yuanzhi Li, Yingyu Liang
Topics: MLT
03 Aug 2018

Human-level performance in first-person multiplayer games with population-based deep reinforcement learning
Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning, Luke Marris, Guy Lever, ..., Joel Z Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, T. Graepel
Topics: OffRL
03 Jul 2018

Meta-Gradient Reinforcement Learning
Zhongwen Xu, H. V. Hasselt, David Silver
24 May 2018

Population Based Training of Neural Networks
Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M. Czarnecki, Jeff Donahue, ..., Tim Green, Iain Dunning, Karen Simonyan, Chrisantha Fernando, Koray Kavukcuoglu
27 Nov 2017

Neural Optimizer Search with Reinforcement Learning
Irwan Bello, Barret Zoph, Vijay Vasudevan, Quoc V. Le
Topics: ODL
21 Sep 2017

Learned Optimizers that Scale and Generalize
Olga Wichrowska, Niru Maheswaranathan, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Nando de Freitas, Jascha Narain Sohl-Dickstein
Topics: AI4CE
14 Mar 2017

Learning to learn by gradient descent by gradient descent
Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
14 Jun 2016

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
Topics: ODL
22 Dec 2014