Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models
arXiv:2203.03131 · 7 March 2022
Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou
Tags: VLM, AAML

Papers citing "Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models"

35 / 35 papers shown

Correlation-Aware Select and Merge Attention for Efficient Fine-Tuning and Context Length Extension
Ning Wang, Zekun Li, Tongxin Bai, Guoqi Li
05 Oct 2024

Propulsion: Steering LLM with Tiny Fine-Tuning
Md. Kowsher, Nusrat Jahan Prottasha, Prakash Bhat
17 Sep 2024

UNLEARN Efficient Removal of Knowledge in Large Language Models
Tyler Lizzo, Larry Heck
Tags: KELM, MoMe, MU
08 Aug 2024

AdaMoE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models
Zihao Zeng, Yibo Miao, Hongcheng Gao, Hao Zhang, Zhijie Deng
Tags: MoE
19 Jun 2024

Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models
Chengzhengxu Li, Xiaoming Liu, Zhaohan Zhang, Yichen Wang, Chen Liu, Y. Lan, Chao Shen
15 Jun 2024

Tox-BART: Leveraging Toxicity Attributes for Explanation Generation of Implicit Hate Speech
Neemesh Yadav, Sarah Masud, Vikram Goyal, Md. Shad Akhtar, Tanmoy Chakraborty
06 Jun 2024

Mixture of LoRA Experts
Xun Wu, Shaohan Huang, Furu Wei
Tags: MoMe
21 Apr 2024

Joint Visual and Text Prompting for Improved Object-Centric Perception with Multimodal Large Language Models
Songtao Jiang, Yan Zhang, Chenyi Zhou, Yeying Jin, Yang Feng, Jian Wu, Zuozhu Liu
Tags: LRM, VLM
06 Apr 2024

Hierarchical Recurrent Adapters for Efficient Multi-Task Adaptation of Large Speech Models
Tsendsuren Munkhdalai, Youzheng Chen, K. Sim, Fadi Biadsy, Tara N. Sainath, P. M. Mengibar
25 Mar 2024

SPT: Fine-Tuning Transformer-based Language Models Efficiently with Sparsification
Yuntao Gui, Xiao Yan, Peiqi Yin, Han Yang, James Cheng
16 Dec 2023

Learning From Mistakes Makes LLM Better Reasoner
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, Weizhu Chen
Tags: LRM
31 Oct 2023

Orchestration of Emulator Assisted Mobile Edge Tuning for AI Foundation Models: A Multi-Agent Deep Reinforcement Learning Approach
Wen-li Yu, Terence Jie Chua, Junfeng Zhao
26 Oct 2023

FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation Models with Mobile Edge Computing
Terence Jie Chua, Wen-li Yu, Junfeng Zhao, Kwok-Yan Lam
Tags: FedML
26 Oct 2023

ModuLoRA: Finetuning 2-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers
Junjie Yin, Jiahao Dong, Yingheng Wang, Christopher De Sa, Volodymyr Kuleshov
Tags: MQ
28 Sep 2023

LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, Jiaya Jia
21 Sep 2023

LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, Min-Bin Lin
Tags: MoMe
25 Jul 2023

Scaling In-Context Demonstrations with Structured Attention
Tianle Cai, Kaixuan Huang, Jason D. Lee, Mengdi Wang
Tags: LRM
05 Jul 2023

Continual Learning with Pretrained Backbones by Tuning in the Input Space
Simone Marullo, Matteo Tiezzi, Marco Gori, S. Melacci, Tinne Tuytelaars
Tags: CLL
05 Jun 2023

Preference-grounded Token-level Guidance for Language Model Fine-tuning
Shentao Yang, Shujian Zhang, Congying Xia, Yihao Feng, Caiming Xiong, Mi Zhou
01 Jun 2023

QLoRA: Efficient Finetuning of Quantized LLMs
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer
Tags: ALM
23 May 2023

When Gradient Descent Meets Derivative-Free Optimization: A Match Made in Black-Box Scenario
Chengcheng Han, Liqing Cui, Renyu Zhu, J. Wang, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
17 May 2023

Make Prompt-based Black-Box Tuning Colorful: Boosting Model Generalization from Three Orthogonal Perspectives
Qiushi Sun, Chengcheng Han, Nuo Chen, Renyu Zhu, Jing Gong, Xiang Li, Ming Gao
Tags: VLM
14 May 2023
"What It Wants Me To Say": Bridging the Abstraction Gap Between End-User
  Programmers and Code-Generating Large Language Models
"What It Wants Me To Say": Bridging the Abstraction Gap Between End-User Programmers and Code-Generating Large Language Models
Michael Xieyang Liu
Advait Sarkar
Carina Negreanu
B. Zorn
Jack Williams
N. Toronto
Andrew D. Gordon
29
106
0
13 Apr 2023

A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation
Hui Tang, Kui Jia
Tags: OOD
16 Mar 2023

How Does In-Context Learning Help Prompt Tuning?
Simeng Sun, Yang Liu, Dan Iter, Chenguang Zhu, Mohit Iyyer
Tags: VLM
22 Feb 2023

Guiding Large Language Models via Directional Stimulus Prompting
Zekun Li, Baolin Peng, Pengcheng He, Michel Galley, Jianfeng Gao, Xi Yan
Tags: LLMAG, LRM, LM&Ro
22 Feb 2023

Zero-Label Prompt Selection
Chonghua Liao, Yanan Zheng, Zhilin Yang
Tags: VLM
09 Nov 2022

Unsupervised Non-transferable Text Classification
Guangtao Zeng, Wei Lu
23 Oct 2022

Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation
Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, Noah Constant
Tags: CLL
25 May 2022

RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P. Xing, Zhiting Hu
25 May 2022

Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Joey Tianyi Zhou, Colin Raffel
11 May 2022

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Tags: VLM
14 Oct 2021

Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning
Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang
Tags: LRM
13 Sep 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020