PromptFL: Let Federated Participants Cooperatively Learn Prompts Instead of Models -- Federated Learning in Age of Foundation Model
24 August 2022 · arXiv:2208.11625
Tao Guo, Song Guo, Junxiao Wang, Wenchao Xu
FedML, VLM, LRM

Papers citing "PromptFL: Let Federated Participants Cooperatively Learn Prompts Instead of Models -- Federated Learning in Age of Foundation Model"

31 papers shown

FedSDAF: Leveraging Source Domain Awareness for Enhanced Federated Domain Generalization
Yiming Li, Zesheng Zhou, Zhenbiao Cao, X. Li, Wei Chen, Xinsong Zhang
FedML · 05 May 2025

FedMVP: Federated Multi-modal Visual Prompt Tuning for Vision-Language Models
Mainak Singha, Subhankar Roy, Sarthak Mehrotra, Ankit Jha, Moloud Abdar, Biplab Banerjee, Elisa Ricci
VLM, VPVLM · 29 Apr 2025

A Survey on Parameter-Efficient Fine-Tuning for Foundation Models in Federated Learning
Jieming Bian, Yuanzhe Peng, Lei Wang, Yin Huang, Jie Xu
FedML · 29 Apr 2025

Capture Global Feature Statistics for One-Shot Federated Learning
Zenghao Guan, Yucan Zhou, Xiaoyan Gu
FedML · 10 Mar 2025

FAA-CLIP: Federated Adversarial Adaptation of CLIP
Yihang Wu, A. Chaddad, Christian Desrosiers, Tareef Daqqaq, R. Kateb
VLM · 26 Feb 2025

Vision Foundation Models in Medical Image Analysis: Advances and Challenges
Pengchen Liang, Bin Pu, Haishan Huang, Yiwei Li, Haoran Wang, Weibo Ma, Qing Chang
VLM, MedIm · 24 Feb 2025

Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models
Jun Luo, Chong Chen, Shandong Wu
FedML, VLM, MoE · 14 Oct 2024

Selective Aggregation for Low-Rank Adaptation in Federated Learning
Pengxin Guo, Shuang Zeng, Y. Wang, Huijie Fan, Feifei Wang, Liangqiong Qu
FedML · 02 Oct 2024

Merge, Ensemble, and Cooperate! A Survey on Collaborative Strategies in the Era of Large Language Models
Jinliang Lu, Ziliang Pang, Min Xiao, Yaochen Zhu, Rui Xia, Jiajun Zhang
MoMe · 08 Jul 2024

On-Demand Model and Client Deployment in Federated Learning with Deep Reinforcement Learning
M. Chahoud, Hani Sami, Azzam Mourad, Hadi Otrok, Jamal Bentahar, Mohsen Guizani
12 May 2024

Privacy Preserving Prompt Engineering: A Survey
Kennedy Edemacu, Xintao Wu
09 Apr 2024

Global and Local Prompts Cooperation via Optimal Transport for Federated Learning
Hongxia Li, Wei Huang, Jingya Wang, Ye-ling Shi
FedML, VLM · 29 Feb 2024

Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models
Yae Jee Cho, Luyang Liu, Zheng Xu, Aldi Fahrezi, Gauri Joshi
12 Jan 2024

FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation Models with Mobile Edge Computing
Terence Jie Chua, Wen-li Yu, Junfeng Zhao, Kwok-Yan Lam
FedML · 26 Oct 2023

The Role of Federated Learning in a Wireless World with Foundation Models
Zihan Chen, Howard H. Yang, Y. C. Tay, Kai Fong Ernest Chong, Tony Q.S. Quek
AI4CE · 06 Oct 2023

Efficient Model Personalization in Federated Learning via Client-Specific Prompt Generation
Fu-En Yang, Chien-Yi Wang, Yu-Chiang Frank Wang
VLM, FedML · 29 Aug 2023

When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
Weiming Zhuang, Chen Chen, Lingjuan Lyu, Chong Chen, Yaochu Jin
AIFin, AI4CE · 27 Jun 2023

Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning
Yunchao Zhang, Zonglin Di, Kai-Qing Zhou, Cihang Xie, Xin Eric Wang
FedML, AAML · 27 Nov 2022

FedTune: A Deep Dive into Efficient Federated Fine-Tuning with Pre-trained Transformers
Jinyu Chen, Wenchao Xu, Song Guo, Junxiao Wang, Jie Zhang, Yining Qi
FedML · 15 Nov 2022

Federated Adaptive Prompt Tuning for Multi-Domain Collaborative Learning
Shangchao Su, Min Yang, Bin Li, Xiangyang Xue
VLM, FedML · 15 Nov 2022

On the Difficulty of Defending Self-Supervised Learning against Model Extraction
Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, Nicolas Papernot
MIACV · 16 May 2022

CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP
Andreas Fürst, Elisabeth Rumetshofer, Johannes Lehner, Viet-Hung Tran, Fei Tang, ..., David P. Kreil, Michael K Kopp, Günter Klambauer, Angela Bitto-Nemling, Sepp Hochreiter
VLM, CLIP · 21 Oct 2021

VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding
Hu Xu, Gargi Ghosh, Po-Yao (Bernie) Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer
CLIP, VLM · 28 Sep 2021

Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
VPVLM, CLIP, VLM · 02 Sep 2021

How Much Can CLIP Benefit Vision-and-Language Tasks?
Sheng Shen, Liunian Harold Li, Hao Tan, Joey Tianyi Zhou, Anna Rohrbach, Kai-Wei Chang, Z. Yao, Kurt Keutzer
CLIP, VLM, MLLM · 13 Jul 2021

Open-vocabulary Object Detection via Vision and Language Knowledge Distillation
Xiuye Gu, Nayeon Lee, Weicheng Kuo, Huayu Chen
VLM, ObjD · 28 Apr 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
VLM, CLIP · 11 Feb 2021

Federated Learning on Non-IID Data Silos: An Experimental Study
Yue Liu, Yiqun Diao, Quan Chen, Bingsheng He
FedML, OOD · 03 Feb 2021

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
AAML · 01 Jan 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020