ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Cross-Task Generalization via Natural Language Crowdsourcing Instructions

18 April 2021
Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hannaneh Hajishirzi
    LRM

Papers citing "Cross-Task Generalization via Natural Language Crowdsourcing Instructions"

Showing 50 of 562 papers.
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo
FedML · VLM · UQCV · LRM
19 · 25 · 0 · 06 Oct 2022

Learning by Distilling Context
Charles Burton Snell, Dan Klein, Ruiqi Zhong
ReLM · LRM
168 · 44 · 0 · 30 Sep 2022

News Summarization and Evaluation in the Era of GPT-3
Tanya Goyal, Junyi Jessy Li, Greg Durrett
ELM
31 · 387 · 0 · 26 Sep 2022

Best Prompts for Text-to-Image Models and How to Find Them
Nikita Pavlichenko, Dmitry Ustalov
DiffM
22 · 58 · 0 · 23 Sep 2022

Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan
ELM · ReLM · LRM
211 · 1,124 · 0 · 20 Sep 2022

Selective Annotation Makes Language Models Better Few-Shot Learners
Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, ..., Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu
31 · 244 · 0 · 05 Sep 2022

HELP ME THINK: A Simple Prompting Strategy for Non-experts to Create Customized Content with Models
Swaroop Mishra, E. Nouri
LRM
41 · 25 · 0 · 17 Aug 2022

Few-shot Adaptation Works with UnpredicTable Data
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
31 · 5 · 0 · 01 Aug 2022

Language models show human-like content effects on reasoning tasks
Ishita Dasgupta, Andrew Kyle Lampinen, Stephanie C. Y. Chan, Hannah R. Sheahan, Antonia Creswell, D. Kumaran, James L. McClelland, Felix Hill
ReLM · LRM
30 · 181 · 0 · 14 Jul 2022

BioTABQA: Instruction Learning for Biomedical Table Question Answering
Man Luo, S. Saxena, Swaroop Mishra, Mihir Parmar, Chitta Baral
LMTD
157 · 15 · 0 · 06 Jul 2022

MVP: Multi-task Supervised Pre-training for Natural Language Generation
Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen
46 · 24 · 0 · 24 Jun 2022

Neural Retriever and Go Beyond: A Thesis Proposal
Man Luo
35 · 1 · 0 · 31 May 2022

Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts
Qinyuan Ye, Juan Zha, Xiang Ren
MoE
18 · 12 · 0 · 25 May 2022

RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, Zhiting Hu
27 · 322 · 0 · 25 May 2022

Is a Question Decomposition Unit All We Need?
Pruthvi H. Patel, Swaroop Mishra, Mihir Parmar, Chitta Baral
ReLM
158 · 51 · 0 · 25 May 2022

FLUTE: Figurative Language Understanding through Textual Explanations
Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, Smaranda Muresan
54 · 67 · 0 · 24 May 2022

Fine-tuned Language Models are Continual Learners
Thomas Scialom, Tuhin Chakrabarty, Smaranda Muresan
CLL · LRM
145 · 117 · 0 · 24 May 2022

ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi
130 · 100 · 0 · 24 May 2022

Instruction Induction: From Few Examples to Natural Language Task Descriptions
Or Honovich, Uri Shaham, Samuel R. Bowman, Omer Levy
ELM · LRM
120 · 137 · 0 · 22 May 2022

22 May 2022
Language Models in the Loop: Incorporating Prompting into Weak
  Supervision
Language Models in the Loop: Incorporating Prompting into Weak Supervision
Ryan Smith
Jason Alan Fries
Braden Hancock
Stephen H. Bach
53
53
0
04 May 2022
Improving In-Context Few-Shot Learning via Self-Supervised Training
Mingda Chen, Jingfei Du, Ramakanth Pasunuru, Todor Mihaylov, Srini Iyer, Ves Stoyanov, Zornitsa Kozareva
SSL · AI4MH
38 · 64 · 0 · 03 May 2022

Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning
Oscar Sainz, Itziar Gonzalez-Dios, Oier López de Lacalle, Bonan Min, Eneko Agirre
31 · 49 · 0 · 03 May 2022

Don't Blame the Annotator: Bias Already Starts in the Annotation Instructions
Mihir Parmar, Swaroop Mishra, Mor Geva, Chitta Baral
36 · 55 · 0 · 01 May 2022

What Makes Instruction Learning Hard? An Investigation and a New Challenge in a Synthetic Environment
Matthew Finlayson, Kyle Richardson, Ashish Sabharwal, Peter Clark
30 · 12 · 0 · 19 Apr 2022

Unsupervised Cross-Task Generalization via Retrieval Augmentation
Bill Yuchen Lin, Kangmin Tan, Chris Miller, Beiwen Tian, Xiang Ren
LRM · RALM
27 · 48 · 0 · 17 Apr 2022

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, ..., Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, Daniel Khashabi
ELM
59 · 790 · 0 · 16 Apr 2022

In-BoXBART: Get Instructions into Biomedical Multi-Task Learning
Mihir Parmar, Swaroop Mishra, Mirali Purohit, Man Luo, M. H. Murad, Chitta Baral
28 · 22 · 0 · 15 Apr 2022

CLUES: A Benchmark for Learning Classifiers using Natural Language Explanations
Rakesh R Menon, Sayan Ghosh, Shashank Srivastava
LRM · ELM
29 · 9 · 0 · 14 Apr 2022

Can language models learn from explanations in context?
Andrew Kyle Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill
LRM · ReLM
56 · 285 · 0 · 05 Apr 2022

PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Ves Stoyanov, Majid Yazdani
VLM
34 · 70 · 0 · 03 Apr 2022

Evaluating Prompts Across Multiple Choice Tasks In a Zero-Shot Setting
Gabriel Orlanski
LRM
27 · 2 · 0 · 29 Mar 2022

How Many Data Samples is an Additional Instruction Worth?
Ravsehaj Singh Puri, Swaroop Mishra, Mihir Parmar, Chitta Baral
25 · 17 · 0 · 17 Mar 2022

Less is More: Summary of Long Instructions is Better for Program Synthesis
Kirby Kuznia, Swaroop Mishra, Mihir Parmar, Chitta Baral
AIMat
28 · 22 · 0 · 16 Mar 2022

ConTinTin: Continual Learning from Task Instructions
Wenpeng Yin, Jia Li, Caiming Xiong
CLL
29 · 29 · 0 · 16 Mar 2022

Choose Your QA Model Wisely: A Systematic Study of Generative and Extractive Readers for Question Answering
Man Luo, Kazuma Hashimoto, Semih Yavuz, Zhiwei Liu, Chitta Baral, Yingbo Zhou
29 · 21 · 0 · 14 Mar 2022

GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models
Archiki Prasad, Peter Hase, Xiang Zhou, Joey Tianyi Zhou
23 · 117 · 0 · 14 Mar 2022

PromptChainer: Chaining Large Language Model Prompts through Visual Programming
Tongshuang Wu, Ellen Jiang, Aaron Donsbach, J. Gray, A. Molina, Michael Terry, Carrie J. Cai
LLMAG · LRM
19 · 207 · 0 · 13 Mar 2022

One-Shot Learning from a Demonstration with Hierarchical Latent Language
Nathaniel Weir, Xingdi Yuan, Marc-Alexandre Côté, Matthew J. Hausknecht, Romain Laroche, Ida Momennejad, H. V. Seijen, Benjamin Van Durme
BDL
27 · 6 · 0 · 09 Mar 2022

InstructionNER: A Multi-Task Instruction-Based Generative Framework for Few-shot NER
Liwen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, Weiran Xu
24 · 51 · 0 · 08 Mar 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM
369 · 12,081 · 0 · 04 Mar 2022

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, M. Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer
LLMAG · LRM
76 · 1,403 · 0 · 25 Feb 2022

UnifiedQA-v2: Stronger Generalization via Broader Cross-Format Training
Daniel Khashabi, Yeganeh Kordi, Hannaneh Hajishirzi
25 · 66 · 0 · 23 Feb 2022

Generating Training Data with Language Models: Towards Zero-Shot Language Understanding
Yu Meng, Jiaxin Huang, Yu Zhang, Jiawei Han
SyDa
32 · 229 · 0 · 09 Feb 2022

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, ..., Khalid Almubarak, Xiangru Tang, Dragomir R. Radev, Mike Tian-Jian Jiang, Alexander M. Rush
VLM
225 · 340 · 0 · 02 Feb 2022

Describing Differences between Text Distributions with Natural Language
Ruiqi Zhong, Charles Burton Snell, Dan Klein, Jacob Steinhardt
VLM
132 · 42 · 0 · 28 Jan 2022

Description-Driven Task-Oriented Dialog Modeling
Jeffrey Zhao, Raghav Gupta, Yuan Cao, Dian Yu, Mingqiu Wang, Harrison Lee, Abhinav Rastogi, Izhak Shafran, Yonghui Wu
48 · 64 · 0 · 21 Jan 2022

ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization
Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, Zhilin Yang
VLM · LRM · AI4CE
36 · 69 · 0 · 18 Jan 2022

Few-shot Learning with Multilingual Language Models
Xi Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, ..., Luke Zettlemoyer, Zornitsa Kozareva, Mona T. Diab, Ves Stoyanov, Xian Li
BDL · ELM · LRM
64 · 286 · 0 · 20 Dec 2021

Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts
Daniel Khashabi, Xinxi Lyu, Sewon Min, Lianhui Qin, Kyle Richardson, ..., Hannaneh Hajishirzi, Tushar Khot, Ashish Sabharwal, Sameer Singh, Yejin Choi
26 · 75 · 0 · 15 Dec 2021

True Few-Shot Learning with Prompts -- A Real-World Perspective
Timo Schick, Hinrich Schütze
VLM
27 · 64 · 0 · 26 Nov 2021
