SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning
30 May 2023
Yifan Yang, Peiyao Xiao, Kaiyi Ji
FedML

Papers citing "SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning"

11 / 11 papers shown
Title
Fully First-Order Methods for Decentralized Bilevel Optimization
Fully First-Order Methods for Decentralized Bilevel Optimization
Xiaoyu Wang
Xuxing Chen
Shiqian Ma
Tong Zhang
42
0
0
25 Oct 2024
A Single-Loop Algorithm for Decentralized Bilevel Optimization
A Single-Loop Algorithm for Decentralized Bilevel Optimization
Youran Dong
Shiqian Ma
Junfeng Yang
Chao Yin
39
7
0
15 Nov 2023
Network Utility Maximization with Unknown Utility Functions: A
  Distributed, Data-Driven Bilevel Optimization Approach
Network Utility Maximization with Unknown Utility Functions: A Distributed, Data-Driven Bilevel Optimization Approach
Kaiyi Ji
Lei Ying
27
7
0
04 Jan 2023
A Penalty-Based Method for Communication-Efficient Decentralized Bilevel
  Programming
A Penalty-Based Method for Communication-Efficient Decentralized Bilevel Programming
Parvin Nazari
Ahmad Mousavi
Davoud Ataee Tarzanagh
George Michailidis
41
4
0
08 Nov 2022
BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach
BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach
Mao Ye
B. Liu
S. Wright
Peter Stone
Qian Liu
72
83
0
19 Sep 2022
A framework for bilevel optimization that enables stochastic and global
  variance reduction algorithms
A framework for bilevel optimization that enables stochastic and global variance reduction algorithms
Mathieu Dagréou
Pierre Ablin
Samuel Vaiter
Thomas Moreau
139
96
0
31 Jan 2022
Amortized Implicit Differentiation for Stochastic Bilevel Optimization
Amortized Implicit Differentiation for Stochastic Bilevel Optimization
Michael Arbel
Julien Mairal
105
58
0
29 Nov 2021
Linear Convergence in Federated Learning: Tackling Client Heterogeneity
  and Sparse Gradients
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra
Rayana H. Jaafar
George J. Pappas
Hamed Hassani
FedML
55
157
0
14 Feb 2021
Bilevel Programming for Hyperparameter Optimization and Meta-Learning
Bilevel Programming for Hyperparameter Optimization and Meta-Learning
Luca Franceschi
P. Frasconi
Saverio Salzo
Riccardo Grazzi
Massimiliano Pontil
112
718
0
13 Jun 2018
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn
Pieter Abbeel
Sergey Levine
OOD
463
11,715
0
09 Mar 2017
Forward and Reverse Gradient-Based Hyperparameter Optimization
Forward and Reverse Gradient-Based Hyperparameter Optimization
Luca Franceschi
Michele Donini
P. Frasconi
Massimiliano Pontil
133
409
0
06 Mar 2017
1