Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards
arXiv: 2210.12050 · 21 October 2022
Yekun Chai, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Community: VLM

Papers citing "Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards" (5 of 5 shown)

Putting People in LLMs' Shoes: Generating Better Answers via Question Rewriter
Junhao Chen, Bowen Wang, Zhouqiang Jiang, Yuta Nakashima
20 Aug 2024

Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey
Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, ..., Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao
Community: ALM
30 May 2023

Fine-Tuning Language Models with Just Forward Passes
Sadhika Malladi, Tianyu Gao, Eshaan Nichani, Alexandru Damian, Jason D. Lee, Danqi Chen, Sanjeev Arora
27 May 2023

Small Language Models Improve Giants by Rewriting Their Outputs
Giorgos Vernikos, Arthur Bražinskas, Jakub Adamek, Jonathan Mallinson, Aliaksei Severyn, Eric Malmi
Community: BDL, LRM
22 May 2023

PromptBoosting: Black-Box Text Classification with Ten Forward Passes
Bairu Hou, J. O'Connor, Jacob Andreas, Shiyu Chang, Yang Zhang
Community: VLM
19 Dec 2022