BiSLS/SPS: Auto-tune Step Sizes for Stable Bi-level Optimization

30 May 2023
Chen Fan, Gaspard Choné-Ducasse, Mark W. Schmidt, Christos Thrampoulidis
ArXiv · PDF · HTML

Papers citing "BiSLS/SPS: Auto-tune Step Sizes for Stable Bi-level Optimization"

5 / 5 papers shown

An adaptively inexact first-order method for bilevel optimization with application to hyperparameter learning
Mohammad Salehi, Subhadip Mukherjee, Lindon Roberts, Matthias Joachim Ehrhardt
19 Aug 2023 · 5 citations

A framework for bilevel optimization that enables stochastic and global variance reduction algorithms
Mathieu Dagréou, Pierre Ablin, Samuel Vaiter, Thomas Moreau
31 Jan 2022 · 96 citations

Sign-MAML: Efficient Model-Agnostic Meta-Learning by SignSGD
Chen Fan, Parikshit Ram, Sijia Liu
15 Sep 2021 · 16 citations · FedML

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
09 Mar 2017 · 11,684 citations · OOD

Forward and Reverse Gradient-Based Hyperparameter Optimization
Luca Franceschi, Michele Donini, P. Frasconi, Massimiliano Pontil
06 Mar 2017 · 406 citations