Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning

11 May 2022
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Joey Tianyi Zhou, Colin Raffel

Papers citing "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning"

50 / 162 papers shown
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, Jiaya Jia
53 · 152 · 0 · 21 Sep 2023

SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding
Tianyu Yu, Chengyue Jiang, Chao Lou, Shen Huang, Xiaobin Wang, ..., Haitao Zheng, Ningyu Zhang, Pengjun Xie, Fei Huang, Yong-jia Jiang
LRM · 59 · 16 · 0 · 21 Aug 2023

Investigating the Learning Behaviour of In-context Learning: A Comparison with Supervised Learning
Xindi Wang, Yufei Wang, Can Xu, Xiubo Geng, Bowen Zhang, Chongyang Tao, Frank Rudzicz, Robert E. Mercer, Daxin Jiang
33 · 11 · 0 · 28 Jul 2023

PASTA: Pretrained Action-State Transformer Agents
Raphael Boige, Yannis Flet-Berliac, Arthur Flajolet, Guillaume Richard, Thomas Pierrot
LM&Ro · OffRL · 40 · 5 · 0 · 20 Jul 2023

Unsupervised Calibration through Prior Adaptation for Text Classification using Large Language Models
Lautaro Estienne, Luciana Ferrer, Matías Vera, Pablo Piantanida
VLM · 34 · 1 · 0 · 13 Jul 2023

Parameter-efficient is not sufficient: Exploring Parameter, Memory, and Time Efficient Adapter Tuning for Dense Predictions
Dongshuo Yin, Xueting Han, Bin Li, Hao Feng, Jinghua Bai
VPVLM · 36 · 18 · 0 · 16 Jun 2023

Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models
Fengzhu Zeng, Wei Gao
23 · 5 · 0 · 05 Jun 2023

Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
Baohao Liao, Shaomu Tan, Christof Monz
KELM · 23 · 29 · 0 · 01 Jun 2023

Parameter-Efficient Fine-Tuning without Introducing New Latency
Baohao Liao, Yan Meng, Christof Monz
24 · 49 · 0 · 26 May 2023

Estimating class separability of text embeddings with persistent homology
Kostis Gourgoulias, Najah F. Ghalyan, Maxime Labonne, Yash Satsangi, Sean J. Moran, Joseph Sabelja
38 · 0 · 0 · 24 May 2023

VIP5: Towards Multimodal Foundation Models for Recommendation
Shijie Geng, Juntao Tan, Shuchang Liu, Zuohui Fu, Yongfeng Zhang
32 · 70 · 0 · 23 May 2023

SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations
Jesus Solano, Oana-Maria Camburu, Pasquale Minervini
20 · 1 · 0 · 22 May 2023

Automated Few-shot Classification with Instruction-Finetuned Language Models
Rami Aly, Xingjian Shi, Kaixiang Lin, Aston Zhang, A. Wilson
38 · 9 · 0 · 21 May 2023

CoEdIT: Text Editing by Task-Specific Instruction Tuning
Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang
ALM · 23 · 56 · 0 · 17 May 2023

Improving Small Language Models on PubMedQA via Generative Data Augmentation
Zhen Guo, Peiqi Wang, Yanwei Wang, Shangdi Yu
LM&MA · MedIm · 18 · 10 · 0 · 12 May 2023

A Comprehensive Analysis of Adapter Efficiency
Nandini Mundra, Sumanth Doddapaneni, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra
26 · 10 · 0 · 12 May 2023

PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques
Mohammed Sabry, Anya Belz
38 · 8 · 0 · 24 Apr 2023

ART: Automatic multi-step reasoning and tool-use for large language models
Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro
KELM · ReLM · LRM · 23 · 142 · 0 · 16 Mar 2023

Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs
Kelvin Guu, Albert Webson, Ellie Pavlick, Lucas Dixon, Ian Tenney, Tolga Bolukbasi
TDI · 70 · 33 · 0 · 14 Mar 2023

Towards Zero-Shot Functional Compositionality of Language Models
Hangyeol Yu, Myeongho Jeong, Jamin Shin, Hyeongdon Moon, Juneyoung Park, Seungtaek Choi
37 · 1 · 0 · 06 Mar 2023

Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
Zhen Wang, Yikang Shen, Leonid Karlinsky, Rogerio Feris, Huan Sun, Yoon Kim
VLM · VPVLM · 44 · 107 · 0 · 06 Mar 2023

Modular Deep Learning
Jonas Pfeiffer, Sebastian Ruder, Ivan Vulić, E. Ponti
MoMe · OOD · 32 · 73 · 0 · 22 Feb 2023

How Does In-Context Learning Help Prompt Tuning?
Simeng Sun, Yang Liu, Dan Iter, Chenguang Zhu, Mohit Iyyer
VLM · 38 · 17 · 0 · 22 Feb 2023

Few-shot Multimodal Multitask Multilingual Learning
Aman Chadha, Vinija Jain
53 · 0 · 0 · 19 Feb 2023

Towards Agile Text Classifiers for Everyone
Maximilian Mozes, Jessica Hoffmann, Katrin Tomanek, Muhamed Kouate, Nithum Thain, Ann Yuan, Tolga Bolukbasi, Lucas Dixon
49 · 13 · 0 · 13 Feb 2023

AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning
Han Zhou, Xingchen Wan, Ivan Vulić, Anna Korhonen
24 · 45 · 0 · 28 Jan 2023

Iterated Decomposition: Improving Science Q&A by Supervising Reasoning Processes
Justin Reppert, Ben Rachbach, Charlie George, Luke Stebbing, Ju-Seung Byun, Maggie Appleton, Andreas Stuhlmüller
ReLM · LRM · 40 · 17 · 0 · 04 Jan 2023

MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning
Zhiyang Xu, Ying Shen, Lifu Huang
MLLM · 32 · 110 · 0 · 21 Dec 2022

KronA: Parameter Efficient Tuning with Kronecker Adapter
Ali Edalati, Marzieh S. Tahaei, I. Kobyzev, V. Nia, J. Clark, Mehdi Rezagholizadeh
45 · 94 · 0 · 20 Dec 2022

(QA)²: Question Answering with Questionable Assumptions
Najoung Kim, Phu Mon Htut, Sam Bowman, Jackson Petty
29 · 33 · 0 · 20 Dec 2022

BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting
Zheng-Xin Yong, Hailey Schoelkopf, Niklas Muennighoff, Alham Fikri Aji, David Ifeoluwa Adelani, ..., Genta Indra Winata, Stella Biderman, Edward Raff, Dragomir R. Radev, Vassilina Nikoulina
CLL · VLM · AI4CE · LRM · 35 · 81 · 0 · 19 Dec 2022

DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context Tuning
Praveen Venkateswaran, Evelyn Duesterwald, Vatche Isahagian
41 · 7 · 0 · 06 Dec 2022

Legal Prompt Engineering for Multilingual Legal Judgement Prediction
Dietrich Trautmann, Alina Petrova, Frank Schilder
ELM · AILaw · 41 · 74 · 0 · 05 Dec 2022

ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning
Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen
MoMe · 28 · 52 · 0 · 02 Dec 2022

Data-Efficient Finetuning Using Cross-Task Nearest Neighbors
Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, Pradeep Dasigi
33 · 19 · 0 · 01 Dec 2022

Prompting PaLM for Translation: Assessing Strategies and Performance
David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, George F. Foster
LRM · 27 · 154 · 0 · 16 Nov 2022

Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning
Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek F. Abdelzaher, Jiawei Han
VLM · 56 · 47 · 0 · 06 Nov 2022

Continued Pretraining for Better Zero- and Few-Shot Promptability
Zhaofeng Wu, Robert L. Logan IV, Pete Walsh, Akshita Bhagia, Dirk Groeneveld, Sameer Singh, Iz Beltagy
VLM · 44 · 12 · 0 · 19 Oct 2022

Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of In-Context Experts
Nghia T. Le, Fan Bai, Alan Ritter
37 · 12 · 0 · 07 Oct 2022

Automatic Chain of Thought Prompting in Large Language Models
Zhuosheng Zhang, Aston Zhang, Mu Li, Alexander J. Smola
ReLM · LRM · 67 · 581 · 0 · 07 Oct 2022

Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
VLM · 39 · 2 · 0 · 06 Oct 2022

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo
FedML · VLM · UQCV · LRM · 19 · 25 · 0 · 06 Oct 2022

OCD: Learning to Overfit with Conditional Diffusion Models
Shahar Lutati, Lior Wolf
DiffM · 22 · 8 · 0 · 02 Oct 2022

Bidirectional Language Models Are Also Few-shot Learners
Ajay Patel, Bryan Li, Mohammad Sadegh Rasooli, Noah Constant, Colin Raffel, Chris Callison-Burch
LRM · 70 · 45 · 0 · 29 Sep 2022

Efficient Few-Shot Learning Without Prompts
Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, Oren Pereg
VLM · 36 · 182 · 0 · 22 Sep 2022

Generate rather than Retrieve: Large Language Models are Strong Context Generators
W. Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, Meng Jiang
RALM · AIMat · 235 · 322 · 0 · 21 Sep 2022

Efficient Methods for Natural Language Processing: A Survey
Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
30 · 109 · 0 · 31 Aug 2022

Can Foundation Models Help Us Achieve Perfect Secrecy?
Simran Arora, Christopher Ré
FedML · 24 · 6 · 0 · 27 May 2022

Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning
Mozhdeh Gheini, Xuezhe Ma, Jonathan May
44 · 5 · 0 · 25 May 2022

ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi
130 · 100 · 0 · 24 May 2022