
Noisy Channel Language Model Prompting for Few-Shot Text Classification
Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer · 9 August 2021 · arXiv:2108.04106 · topic: VLM

Papers citing "Noisy Channel Language Model Prompting for Few-Shot Text Classification" (50 of 156 shown)

1. Prompt-Based Editing for Text Style Transfer
   Guoqing Luo, Yu Tong Han, Lili Mou, Mauajama Firdaus · 23 citations · 27 Jan 2023
2. Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning
   Xinyi Wang, Wanrong Zhu, Michael Stephen Saxon, Mark Steyvers, William Yang Wang · BDL · 92 citations · 27 Jan 2023
3. A Survey on In-context Learning
   Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, ..., Zhiyong Wu, Baobao Chang, Xu Sun, Lei Li, Zhifang Sui · ReLM, AIMat · 462 citations · 31 Dec 2022
4. Not Just Pretty Pictures: Toward Interventional Data Augmentation Using Text-to-Image Generators
   Jianhao Yuan, Francesco Pinto, Adam Davies, Philip Torr · DiffM · 12 citations · 21 Dec 2022
5. Parallel Context Windows for Large Language Models
   Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud D. Karpas, Amnon Shashua, Kevin Leyton-Brown, Y. Shoham · RALM · 69 citations · 21 Dec 2022
6. In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models
   Yukun Huang, Yanda Chen, Zhou Yu, Kathleen McKeown · 30 citations · 20 Dec 2022
7. Empowering Sentence Encoders with Prompting and Label Retrieval for Zero-shot Text Classification
   Jimin Hong, Jungsoo Park, Daeyoung Kim, Seongjae Choi, Bokyung Son, Jaewoo Kang · 3 citations · 20 Dec 2022
8. Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations
   Xinxi Lyu, Sewon Min, Iz Beltagy, Luke Zettlemoyer, Hannaneh Hajishirzi · VLM · 62 citations · 19 Dec 2022
9. Rethinking the Role of Scale for In-Context Learning: An Interpretability-based Case Study at 66 Billion Scale
   Hritik Bansal, Karthik Gopalakrishnan, Saket Dingliwal, S. Bodapati, Katrin Kirchhoff, Dan Roth · LRM · 48 citations · 18 Dec 2022
10. Searching for Effective Multilingual Fine-Tuning Methods: A Case Study in Summarization
    Yiwei Qin, Graham Neubig, Pengfei Liu · 3 citations · 12 Dec 2022
11. Coder Reviewer Reranking for Code Generation
    Tianyi Zhang, Tao Yu, Tatsunori B. Hashimoto, M. Lewis, Wen-tau Yih, Daniel Fried, Sida I. Wang · 92 citations · 29 Nov 2022
12. Complementary Explanations for Effective In-Context Learning
    Xi Ye, Srini Iyer, Asli Celikyilmaz, Ves Stoyanov, Greg Durrett, Ramakanth Pasunuru · ReLM, LRM · 86 citations · 25 Nov 2022
13. MACSum: Controllable Summarization with Mixed Attributes
    Yusen Zhang, Yang Liu, Ziyi Yang, Yuwei Fang, Yulong Chen, Dragomir R. Radev, Chenguang Zhu, Michael Zeng, Rui Zhang · 15 citations · 09 Nov 2022
14. Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning
    Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek F. Abdelzaher, Jiawei Han · VLM · 46 citations · 06 Nov 2022
15. Controllable Factuality in Document-Grounded Dialog Systems Using a Noisy Channel Model
    Nico Daheim, David Thulke, Christian Dugast, Hermann Ney · HILM · 4 citations · 31 Oct 2022
16. RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering
    Victor Zhong, Weijia Shi, Wen-tau Yih, Luke Zettlemoyer · 19 citations · 25 Oct 2022
17. ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback
    Jiacheng Ye, Jiahui Gao, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong · SyDa, VLM · 72 citations · 22 Oct 2022
18. Robustness of Demonstration-based Learning Under Limited Data Scenario
    Hongxin Zhang, Yanzhe Zhang, Ruiyi Zhang, Diyi Yang · 13 citations · 19 Oct 2022
19. Continued Pretraining for Better Zero- and Few-Shot Promptability
    Zhaofeng Wu, Robert L. Logan IV, Pete Walsh, Akshita Bhagia, Dirk Groeneveld, Sameer Singh, Iz Beltagy · VLM · 12 citations · 19 Oct 2022
20. Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of In-Context Experts
    Nghia T. Le, Fan Bai, Alan Ritter · 12 citations · 07 Oct 2022
21. Automatic Chain of Thought Prompting in Large Language Models
    Zhuosheng Zhang, Aston Zhang, Mu Li, Alexander J. Smola · ReLM, LRM · 575 citations · 07 Oct 2022
22. State-of-the-art generalisation research in NLP: A taxonomy and review
    Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, ..., Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, Zhijing Jin · 93 citations · 06 Oct 2022
23. Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
    Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo · FedML, VLM, UQCV, LRM · 25 citations · 06 Oct 2022
24. ThinkSum: Probabilistic reasoning over sets using large language models
    Batu Mehmet Ozturkler, Nikolay Malkin, Zhen Wang, Nebojsa Jojic · ReLM, LRM · 22 citations · 04 Oct 2022
25. Cold-Start Data Selection for Few-shot Language Model Fine-tuning: A Prompt-Based Uncertainty Propagation Approach
    Yue Yu, Rongzhi Zhang, Ran Xu, Jieyu Zhang, Jiaming Shen, Chao Zhang · 21 citations · 15 Sep 2022
26. Selective Annotation Makes Language Models Better Few-Shot Learners
    Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, ..., Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu · 244 citations · 05 Sep 2022
27. What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
    Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant · 451 citations · 01 Aug 2022
28. No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence
    Chaozheng Wang, Yuanhang Yang, Cuiyun Gao, Yun Peng, Hongyu Zhang, Michael R. Lyu · AAML · 134 citations · 24 Jul 2022
29. Emergent Abilities of Large Language Models
    Jason W. Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, ..., Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, J. Dean, W. Fedus · ELM, ReLM, LRM · 2,344 citations · 15 Jun 2022
30. Offline RL for Natural Language Generation with Implicit Language Q Learning
    Charles Burton Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, Sergey Levine · OffRL · 102 citations · 05 Jun 2022
31. Few-shot Subgoal Planning with Language Models
    Lajanugen Logeswaran, Yao Fu, Moontae Lee, Honglak Lee · LRM · 26 citations · 28 May 2022
32. kNN-Prompt: Nearest Neighbor Zero-Shot Inference
    Weijia Shi, Julian Michael, Suchin Gururangan, Luke Zettlemoyer · RALM, VLM · 32 citations · 27 May 2022
33. Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations
    Kang Min Yoo, Junyeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee, Taeuk Kim · 123 citations · 25 May 2022
34. Gradient-Based Constrained Sampling from Language Models
    Sachin Kumar, Biswajit Paria, Yulia Tsvetkov · BDL · 53 citations · 25 May 2022
35. RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
    Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P. Xing, Zhiting Hu · 319 citations · 25 May 2022
36. Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
    Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Joey Tianyi Zhou, Colin Raffel · 849 citations · 11 May 2022
37. Natural Language to Code Translation with Execution
    Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang · 124 citations · 25 Apr 2022
38. PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
    Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Ves Stoyanov, Majid Yazdani · VLM · 69 citations · 03 Apr 2022
39. Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning
    Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Jun Huang · VLM · 31 citations · 01 Apr 2022
40. Pre-trained Token-replaced Detection Model as Few-shot Learner
    Zicheng Li, Shoushan Li, Guodong Zhou · 8 citations · 07 Mar 2022
41. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
    Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, M. Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer · LLMAG, LRM · 1,400 citations · 25 Feb 2022
42. Semantic-Oriented Unlabeled Priming for Large-Scale Language Models
    Yanchen Liu, Timo Schick, Hinrich Schütze · VLM · 15 citations · 12 Feb 2022
43. AdaPrompt: Adaptive Model Training for Prompt-based NLP
    Yulong Chen, Yang Liu, Li Dong, Shuohang Wang, Chenguang Zhu, Michael Zeng, Yue Zhang · VLM · 45 citations · 10 Feb 2022
44. Describing Differences between Text Distributions with Natural Language
    Ruiqi Zhong, Charles Burton Snell, Dan Klein, Jacob Steinhardt · VLM · 42 citations · 28 Jan 2022
45. Adapting Document-Grounded Dialog Systems to Spoken Conversations using Data Augmentation and a Noisy Channel Model
    David Thulke, Nico Daheim, Christian Dugast, Hermann Ney · 3DGS · 4 citations · 16 Dec 2021
46. Learning To Retrieve Prompts for In-Context Learning
    Ohad Rubin, Jonathan Herzig, Jonathan Berant · VPVLM, RALM · 666 citations · 16 Dec 2021
47. Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts
    Daniel Khashabi, Xinxi Lyu, Sewon Min, Lianhui Qin, Kyle Richardson, ..., Hannaneh Hajishirzi, Tushar Khot, Ashish Sabharwal, Sameer Singh, Yejin Choi · 75 citations · 15 Dec 2021
48. True Few-Shot Learning with Prompts -- A Real-World Perspective
    Timo Schick, Hinrich Schütze · VLM · 64 citations · 26 Nov 2021
49. MetaICL: Learning to Learn In Context
    Sewon Min, M. Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi · LRM · 467 citations · 29 Oct 2021
50. Coherence boosting: When your pretrained language model is not paying enough attention
    Nikolay Malkin, Zhen Wang, Nebojsa Jojic · RALM · 35 citations · 15 Oct 2021