ResearchTrend.AI
Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections

10 April 2021
Ruiqi Zhong, Kristy Lee, Zheng Zhang, Dan Klein
arXiv:2104.04670 (abs · PDF · HTML)

Papers citing "Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections"

22 / 122 papers shown
Evaluating Prompts Across Multiple Choice Tasks In a Zero-Shot Setting
Gabriel Orlanski
LRM
29 Mar 2022

ZS4IE: A toolkit for Zero-Shot Information Extraction with simple Verbalizations
Oscar Sainz, Haoling Qiu, Oier López de Lacalle, Eneko Agirre, Bonan Min
SyDa
25 Mar 2022

How Many Data Samples is an Additional Instruction Worth?
Ravsehaj Singh Puri, Swaroop Mishra, Mihir Parmar, Chitta Baral
17 Mar 2022

Speaker Information Can Guide Models to Better Inductive Biases: A Case Study On Predicting Code-Switching
Alissa Ostapenko, S. Wintner, Melinda Fricke, Yulia Tsvetkov
16 Mar 2022

UnifiedQA-v2: Stronger Generalization via Broader Cross-Format Training
Daniel Khashabi, Yeganeh Kordi, Hannaneh Hajishirzi
23 Feb 2022

ZeroGen: Efficient Zero-shot Learning via Dataset Generation
Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong
SyDa
16 Feb 2022

Generating Training Data with Language Models: Towards Zero-Shot Language Understanding
Yu Meng, Jiaxin Huang, Yu Zhang, Jiawei Han
SyDa
09 Feb 2022

Describing Differences between Text Distributions with Natural Language
Ruiqi Zhong, Charles Burton Snell, Dan Klein, Jacob Steinhardt
VLM
28 Jan 2022

ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization
Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, Zhilin Yang
VLM, LRM, AI4CE
18 Jan 2022

UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models
Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, ..., Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu
LMTD
16 Jan 2022

Massive-scale Decoding for Text Generation using Lattices
Jiacheng Xu, Siddhartha Reddy Jonnalagadda, Greg Durrett
AI4CE
14 Dec 2021

NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework
Xingcheng Yao, Yanan Zheng, Xiaocong Yang, Zhilin Yang
07 Nov 2021

Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, Dan Roth
LM&MA, VLM, AI4CE
01 Nov 2021

MetaICL: Learning to Learn In Context
Sewon Min, M. Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi
LRM
29 Oct 2021

MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition
Jinming Zhao, Ruichen Li, Qin Jin, Xinchao Wang, Haizhou Li
27 Oct 2021

Meta-learning via Language Model In-context Tuning
Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He
15 Oct 2021

Context-NER: Contextual Phrase Generation at Scale
Himanshu Gupta, Shreyas Verma, Santosh Mashetty, Swaroop Mishra
16 Sep 2021

PPT: Pre-trained Prompt Tuning for Few-shot Learning
Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang
VLM
09 Sep 2021

FLEX: Unifying Evaluation for Few-Shot NLP
Jonathan Bragg, Arman Cohan, Kyle Lo, Iz Beltagy
15 Jul 2021

CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
Qinyuan Ye, Bill Yuchen Lin, Xiang Ren
18 Apr 2021

Cross-Task Generalization via Natural Language Crowdsourcing Instructions
Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hannaneh Hajishirzi
LRM
18 Apr 2021

Surface Form Competition: Why the Highest Probability Answer Isn't Always Right
Ari Holtzman, Peter West, Vered Schwartz, Yejin Choi, Luke Zettlemoyer
LRM
16 Apr 2021