ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Guarantees for Tuning the Step Size using a Learning-to-Learn Approach

30 June 2020 · Xiang Wang, Shuai Yuan, Chenwei Wu, Rong Ge · arXiv:2006.16495

Papers citing "Guarantees for Tuning the Step Size using a Learning-to-Learn Approach"

9 / 9 papers shown
From Learning to Optimize to Learning Optimization Algorithms
Camille Castera, Peter Ochs · 28 May 2024 · 1 citation
A Nonstochastic Control Approach to Optimization
Xinyi Chen, Elad Hazan · 19 Jan 2023 · 5 citations
Learning-Rate-Free Learning by D-Adaptation
Aaron Defazio, Konstantin Mishchenko · 18 Jan 2023 · 77 citations
Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing
M. Khodak, Renbo Tu, Tian Li, Liam Li, Maria-Florina Balcan, Virginia Smith, Ameet Talwalkar · FedML · 08 Jun 2021 · 78 citations
Generalization Guarantees for Neural Architecture Search with Train-Validation Split
Samet Oymak, Mingchen Li, Mahdi Soltanolkotabi · AI4CE, OOD · 29 Apr 2021 · 13 citations
Learning to Optimize: A Primer and A Benchmark
Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, W. Yin · 23 Mar 2021 · 225 citations
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine · OOD · 09 Mar 2017 · 11,700 citations
Forward and Reverse Gradient-Based Hyperparameter Optimization
Luca Franceschi, Michele Donini, P. Frasconi, Massimiliano Pontil · 06 Mar 2017 · 409 citations
Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang · 08 Dec 2012 · 571 citations