ResearchTrend.AI
Black-box Prompt Learning for Pre-trained Language Models
arXiv:2201.08531 · 21 January 2022
Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li, Yong Lin, Xiao Zhou, Tong Zhang
Tags: VLM, AAML

Papers citing "Black-box Prompt Learning for Pre-trained Language Models"

18 / 18 papers shown
FLOPS: Forward Learning with OPtimal Sampling
Tao Ren, Zishi Zhang, Jinyang Jiang, Guanghao Li, Zeliang Zhang, Mingqian Feng, Yijie Peng
08 Oct 2024

CPT: Consistent Proxy Tuning for Black-box Optimization
Yuanyang He, Zitong Huang, Xinxing Xu, Rick Siow Mong Goh, Salman Khan, W. Zuo, Yong Liu, Chun-Mei Feng
01 Jul 2024

LLM2FEA: Discover Novel Designs with Generative Evolutionary Multitasking
Melvin Wong, Jiao Liu, Thiago Rios, Stefan Menzel, Yew-Soon Ong
21 Jun 2024

A Bayesian approach for prompt optimization in pre-trained language models
Antonio Sabbatella, Andrea Ponti, Antonio Candelieri, I. Giordani, F. Archetti
01 Dec 2023

Prompt-Tuning Decision Transformer with Preference Ranking
Shengchao Hu, Li Shen, Ya-Qin Zhang, Dacheng Tao
Tags: OffRL
16 May 2023

Active Prompting with Chain-of-Thought for Large Language Models
Shizhe Diao, Pengcheng Wang, Yong Lin, Tong Zhang
Tags: ReLM, KELM, LLMAG, LRM
23 Feb 2023

Technical Report -- Competition Solution for Prompt Tuning using Pretrained Language Model
Jiang-Long Song, Wuhe Zou, Feng Li, Xiaolei Qin, Weidong Zhang
13 Dec 2022

Fairness Reprogramming
Guanhua Zhang, Yihua Zhang, Yang Zhang, Wenqi Fan, Qing Li, Sijia Liu, Shiyu Chang
Tags: AAML
21 Sep 2022

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, ..., Jianfei Chen, Yang Liu, Jie Tang, Juan Li, Maosong Sun
14 Mar 2022

Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models
Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou
Tags: VLM, AAML
07 Mar 2022

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer
Tags: VLM, LRM
15 Oct 2021

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Tags: VLM
14 Oct 2021

Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
Tags: AILaw, LRM
18 Apr 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
Tags: AAML
01 Jan 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Tags: ELM
20 Apr 2018