ResearchTrend.AI

arXiv: 2008.07277
Adaptive Hierarchical Hyper-gradient Descent


17 August 2020
Renlong Jie, Junbin Gao, A. Vasnev, Minh-Ngoc Tran

Papers citing "Adaptive Hierarchical Hyper-gradient Descent" (5 of 5 shown)

| Title | Authors | Topics | Date |
| --- | --- | --- | --- |
| Differentiable Self-Adaptive Learning Rate | Bozhou Chen, Hongzhi Wang, Chenmin Ba | ODL | 19 Oct 2022 |
| Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation | Ross M. Clarke, E. T. Oldewage, José Miguel Hernández-Lobato | | 20 Oct 2021 |
| L4: Practical loss-based stepsize adaptation for deep learning | Michal Rolínek, Georg Martius | ODL | 14 Feb 2018 |
| Forward and Reverse Gradient-Based Hyperparameter Optimization | Luca Franceschi, Michele Donini, P. Frasconi, Massimiliano Pontil | | 06 Mar 2017 |
| Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition | Hamed Karimi, J. Nutini, Mark W. Schmidt | | 16 Aug 2016 |