Black-box Prompt Tuning with Subspace Learning

arXiv 2305.03518 · 4 May 2023
Authors: Yuanhang Zheng, Zhixing Tan, Peng Li, Yang Liu
Communities: VLM

Papers citing "Black-box Prompt Tuning with Subspace Learning"

12 / 12 papers shown
MAPO: Boosting Large Language Model Performance with Model-Adaptive Prompt Optimization
Yuyan Chen, Zhihao Wen, Ge Fan, Zhengyu Chen, Wei Yu Wu, Dayiheng Liu, Zhixu Li, Bang Liu, Yanghua Xiao
04 Jul 2024 · Metrics: 39 / 18 / 0

When Large Language Model Meets Optimization
Sen Huang, Kaixiang Yang, Sheng Qi, Rui Wang
16 May 2024 · Metrics: 55 / 8 / 0

When Large Language Models Meet Evolutionary Algorithms: Potential Enhancements and Challenges
Wang Chao, Jiaxuan Zhao, Licheng Jiao, Lingling Li, Fang Liu, Shuyuan Yang
19 Jan 2024 · Metrics: 75 / 13 / 0

When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations
Aleksandar Petrov, Philip H. S. Torr, Adel Bibi
Communities: VPVLM
30 Oct 2023 · Metrics: 30 / 21 / 0

Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models
Zihao Lin, Yan Sun, Yifan Shi, Xueqian Wang, Lifu Huang, Li Shen, Dacheng Tao
04 Oct 2023 · Metrics: 36 / 11 / 0

Learning a Better Initialization for Soft Prompts via Meta-Learning
Yukun Huang, Kun Qian, Zhou Yu
Communities: VLM
25 May 2022 · Metrics: 47 / 9 / 0

BBTv2: Towards a Gradient-Free Future with Large Language Models
Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu
23 May 2022 · Metrics: 108 / 53 / 0

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer
Communities: VLM, LRM
15 Oct 2021 · Metrics: 137 / 277 / 0

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Communities: VLM
14 Oct 2021 · Metrics: 238 / 806 / 0

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Communities: VPVLM
18 Apr 2021 · Metrics: 280 / 3,848 / 0

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020 · Metrics: 241 / 1,919 / 0

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
Communities: OOD
09 Mar 2017 · Metrics: 341 / 11,684 / 0