ResearchTrend.AI
Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models
4 October 2023
Zihao Lin, Yan Sun, Yifan Shi, Xueqian Wang, Lifu Huang, Li Shen, Dacheng Tao

Papers citing "Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models"

18 / 18 papers shown
Towards Harnessing the Collaborative Power of Large and Small Models for Domain Tasks
Yang Liu, Bingjie Yan, Tianyuan Zou, Jianqing Zhang, Zixuan Gu, ..., Jiajian Li, Xiaozhou Ye, Ye Ouyang, Qiang Yang, Wenjie Qu
24 Apr 2025 · ALM
Heterogeneous Federated Learning with Splited Language Model
Yifan Shi, Yuhui Zhang, Ziyue Huang, Xiaofeng Yang, Li Shen, Wei Chen, Xueqian Wang
24 Mar 2024 · FedML
On Catastrophic Inheritance of Large Foundation Models
Hao Chen, Bhiksha Raj, Xing Xie, Jindong Wang
02 Feb 2024 · AI4CE
FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the Power of Heterogeneous Clients
Shangchao Su, Bin Li, Xiangyang Xue
19 Nov 2023 · FedML
ZooPFL: Exploring Black-box Foundation Models for Personalized Federated Learning
Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang Chen, Qiang Yang, Xing Xie, Xiangyang Ji
08 Oct 2023
Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts
Zhi-Yi Chin, Chieh-Ming Jiang, Ching-Chun Huang, Pin-Yu Chen, Wei-Chen Chiu
12 Sep 2023 · DiffM
When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
Weiming Zhuang, Chen Chen, Lingjuan Lyu, Cheng Chen, Yaochu Jin
27 Jun 2023 · AIFin, AI4CE
BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning
Changdae Oh, Hyeji Hwang, Hee-young Lee, Yongtaek Lim, Geunyoung Jung, Jiyoung Jung, Hosik Choi, Kyungwoo Song
26 Mar 2023 · VLM, VPVLM
FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy
Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, Dacheng Tao
21 Feb 2023 · FedML
BBTv2: Towards a Gradient-Free Future with Large Language Models
Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu
23 May 2022
Instruction Induction: From Few Examples to Natural Language Task Descriptions
Or Honovich, Uri Shaham, Samuel R. Bowman, Omer Levy
22 May 2022 · ELM, LRM
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
14 Oct 2021 · VLM
CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models
Yuan Yao, Ao Zhang, Zhengyan Zhang, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun
24 Sep 2021 · MLLM, VPVLM, VLM
Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
02 Sep 2021 · VPVLM, CLIP, VLM
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
18 Apr 2021 · VPVLM
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018 · ELM
Xception: Deep Learning with Depthwise Separable Convolutions
François Chollet
07 Oct 2016 · MDE, BDL, PINN