Do Prompt-Based Models Really Understand the Meaning of their Prompts?

2 September 2021
Albert Webson
Ellie Pavlick
    LRM

Papers citing "Do Prompt-Based Models Really Understand the Meaning of their Prompts?"

50 / 254 papers shown
Large Language Models Are Human-Level Prompt Engineers
Yongchao Zhou
Andrei Ioan Muresanu
Ziwen Han
Keiran Paster
Silviu Pitis
Harris Chan
Jimmy Ba
ALM
LLMAG
21
829
0
03 Nov 2022
GPS: Genetic Prompt Search for Efficient Few-shot Learning
Hanwei Xu
Yujun Chen
Yulun Du
Nan Shao
Yanggang Wang
Haiyu Li
Zhilin Yang
VLM
14
28
0
31 Oct 2022
STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot Classification
Jinta Weng
Yue Hu
Jing Qiu
Heyan Huang
VLM
13
0
0
29 Oct 2022
Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
Andrew Kyle Lampinen
ReLM
ELM
27
35
0
27 Oct 2022
MemoNet: Memorizing All Cross Features' Representations Efficiently via Multi-Hash Codebook Network for CTR Prediction
P. Zhang
Junlin Zhang
20
3
0
25 Oct 2022
Communication breakdown: On the low mutual intelligibility between human and neural captioning
Roberto Dessì
Eleonora Gualdoni
Francesca Franzon
Gemma Boleda
Marco Baroni
VLM
29
6
0
20 Oct 2022
TabLLM: Few-shot Classification of Tabular Data with Large Language Models
S. Hegselmann
Alejandro Buendia
Hunter Lang
Monica Agrawal
Xiaoyi Jiang
David Sontag
LMTD
55
211
0
19 Oct 2022
Robustness of Demonstration-based Learning Under Limited Data Scenario
Hongxin Zhang
Yanzhe Zhang
Ruiyi Zhang
Diyi Yang
40
13
0
19 Oct 2022
Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
Mirac Suzgun
Nathan Scales
Nathanael Schärli
Sebastian Gehrmann
Yi Tay
...
Aakanksha Chowdhery
Quoc V. Le
Ed H. Chi
Denny Zhou
Jason W. Wei
ALM
ELM
LRM
ReLM
92
997
0
17 Oct 2022
Language Models Are Poor Learners of Directional Inference
Tianyi Li
Mohammad Javad Hosseini
Sabine Weber
Mark Steedman
23
10
0
10 Oct 2022
Automatic Chain of Thought Prompting in Large Language Models
Zhuosheng Zhang
Aston Zhang
Mu Li
Alexander J. Smola
ReLM
LRM
67
575
0
07 Oct 2022
Achieving and Understanding Out-of-Distribution Generalization in Systematic Reasoning in Small-Scale Transformers
A. Nam
Mustafa Abdool
Trevor C. Maxfield
James L. McClelland
NAI
LRM
AI4CE
28
1
0
07 Oct 2022
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Seonghyeon Ye
Joel Jang
Doyoung Kim
Yongrae Jo
Minjoon Seo
VLM
36
2
0
06 Oct 2022
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Seonghyeon Ye
Doyoung Kim
Joel Jang
Joongbo Shin
Minjoon Seo
FedML
VLM
UQCV
LRM
19
25
0
06 Oct 2022
Explaining Patterns in Data with Language Models via Interpretable Autoprompting
Chandan Singh
John X. Morris
J. Aneja
Alexander M. Rush
Jianfeng Gao
LRM
33
0
0
04 Oct 2022
ThinkSum: Probabilistic reasoning over sets using large language models
Batu Mehmet Ozturkler
Nikolay Malkin
Zhen Wang
Nebojsa Jojic
ReLM
LRM
49
22
0
04 Oct 2022
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
Joel Jang
Seonghyeon Ye
Minjoon Seo
ELM
LRM
95
64
0
26 Sep 2022
Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation
Xingdi Yuan
Tong Wang
Yen-Hsiang Wang
Emery Fine
Rania Abdelghani
Pauline Lucas
Hélène Sauzéon
Pierre-Yves Oudeyer
30
29
0
22 Sep 2022
Shortcut Learning of Large Language Models in Natural Language Understanding
Mengnan Du
Fengxiang He
Na Zou
Dacheng Tao
Xia Hu
KELM
OffRL
34
84
0
25 Aug 2022
Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models
Hendrik Strobelt
Albert Webson
Victor Sanh
Benjamin Hoover
Johanna Beyer
Hanspeter Pfister
Alexander M. Rush
VLM
36
135
0
16 Aug 2022
Language models show human-like content effects on reasoning tasks
Ishita Dasgupta
Andrew Kyle Lampinen
Stephanie C. Y. Chan
Hannah R. Sheahan
Antonia Creswell
D. Kumaran
James L. McClelland
Felix Hill
ReLM
LRM
30
181
0
14 Jul 2022
BioTABQA: Instruction Learning for Biomedical Table Question Answering
Man Luo
S. Saxena
Swaroop Mishra
Mihir Parmar
Chitta Baral
LMTD
157
15
0
06 Jul 2022
MVP: Multi-task Supervised Pre-training for Natural Language Generation
Tianyi Tang
Junyi Li
Wayne Xin Zhao
Ji-Rong Wen
43
24
0
24 Jun 2022
Using cognitive psychology to understand GPT-3
Marcel Binz
Eric Schulz
ELM
LLMAG
250
440
0
21 Jun 2022
InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning
Prakhar Gupta
Cathy Jiao
Yi-Ting Yeh
Shikib Mehri
M. Eskénazi
Jeffrey P. Bigham
ALM
41
47
0
25 May 2022
RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Mingkai Deng
Jianyu Wang
Cheng-Ping Hsieh
Yihan Wang
Han Guo
Tianmin Shu
Meng Song
Eric P. Xing
Zhiting Hu
27
319
0
25 May 2022
Large Language Models are Zero-Shot Reasoners
Takeshi Kojima
S. Gu
Machel Reid
Yutaka Matsuo
Yusuke Iwasawa
ReLM
LRM
328
4,077
0
24 May 2022
Improving Short Text Classification With Augmented Data Using GPT-3
Salvador Balkus
Donghui Yan
33
33
0
23 May 2022
Instruction Induction: From Few Examples to Natural Language Task Descriptions
Or Honovich
Uri Shaham
Samuel R. Bowman
Omer Levy
ELM
LRM
120
136
0
22 May 2022
Can Foundation Models Wrangle Your Data?
A. Narayan
Ines Chami
Laurel J. Orr
Simran Arora
Christopher Ré
LMTD
AI4CE
181
214
0
20 May 2022
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
Haokun Liu
Derek Tam
Mohammed Muqeeth
Jay Mohta
Tenghao Huang
Joey Tianyi Zhou
Colin Raffel
38
849
0
11 May 2022
The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning
Xi Ye
Greg Durrett
ReLM
LRM
36
168
0
06 May 2022
Language Models in the Loop: Incorporating Prompting into Weak Supervision
Ryan Smith
Jason Alan Fries
Braden Hancock
Stephen H. Bach
50
53
0
04 May 2022
OPT: Open Pre-trained Transformer Language Models
Susan Zhang
Stephen Roller
Naman Goyal
Mikel Artetxe
Moya Chen
...
Daniel Simig
Punit Singh Koura
Anjali Sridhar
Tianlu Wang
Luke Zettlemoyer
VLM
OSLM
AI4CE
59
3,488
0
02 May 2022
Data Distributional Properties Drive Emergent In-Context Learning in Transformers
Stephanie C. Y. Chan
Adam Santoro
Andrew Kyle Lampinen
Jane X. Wang
Aaditya K. Singh
Pierre Harvey Richemond
J. Mcclelland
Felix Hill
58
244
0
22 Apr 2022
In-BoXBART: Get Instructions into Biomedical Multi-Task Learning
Mihir Parmar
Swaroop Mishra
Mirali Purohit
Man Luo
M. H. Murad
Chitta Baral
28
22
0
15 Apr 2022
Can language models learn from explanations in context?
Andrew Kyle Lampinen
Ishita Dasgupta
Stephanie C. Y. Chan
Kory Matthewson
Michael Henry Tessler
Antonia Creswell
James L. McClelland
Jane X. Wang
Felix Hill
LRM
ReLM
41
283
0
05 Apr 2022
PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
Rabeeh Karimi Mahabadi
Luke Zettlemoyer
James Henderson
Marzieh Saeidi
Lambert Mathias
Ves Stoyanov
Majid Yazdani
VLM
31
69
0
03 Apr 2022
GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models
Archiki Prasad
Peter Hase
Xiang Zhou
Joey Tianyi Zhou
20
117
0
14 Mar 2022
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
Sewon Min
Xinxi Lyu
Ari Holtzman
Mikel Artetxe
M. Lewis
Hannaneh Hajishirzi
Luke Zettlemoyer
LLMAG
LRM
40
1,400
0
25 Feb 2022
PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
Stephen H. Bach
Victor Sanh
Zheng-Xin Yong
Albert Webson
Colin Raffel
...
Khalid Almubarak
Xiangru Tang
Dragomir R. Radev
Mike Tian-Jian Jiang
Alexander M. Rush
VLM
225
338
0
02 Feb 2022
Analyzing the Limits of Self-Supervision in Handling Bias in Language
Lisa Bauer
Karthik Gopalakrishnan
Spandana Gella
Yang Liu
Joey Tianyi Zhou
Dilek Z. Hakkani-Tür
ELM
22
1
0
16 Dec 2021
True Few-Shot Learning with Prompts -- A Real-World Perspective
Timo Schick
Hinrich Schütze
VLM
21
64
0
26 Nov 2021
Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
Bonan Min
Hayley L Ross
Elior Sulem
Amir Pouran Ben Veyseh
Thien Huu Nguyen
Oscar Sainz
Eneko Agirre
Ilana Heinz
Dan Roth
LM&MA
VLM
AI4CE
83
1,030
0
01 Nov 2021
Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh
Albert Webson
Colin Raffel
Stephen H. Bach
Lintang Sutawika
...
T. Bers
Stella Biderman
Leo Gao
Thomas Wolf
Alexander M. Rush
LRM
213
1,657
0
15 Oct 2021
Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
Prasetya Ajie Utama
N. Moosavi
Victor Sanh
Iryna Gurevych
AAML
61
35
0
09 Sep 2021
FLEX: Unifying Evaluation for Few-Shot NLP
Jonathan Bragg
Arman Cohan
Kyle Lo
Iz Beltagy
205
104
0
15 Jul 2021
Systematic human learning and generalization from a brief tutorial with explanatory feedback
A. Nam
James L. McClelland
16
1
0
10 Jul 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester
Rami Al-Rfou
Noah Constant
VPVLM
280
3,848
0
18 Apr 2021
BERT & Family Eat Word Salad: Experiments with Text Understanding
Ashim Gupta
Giorgi Kvernadze
Vivek Srikumar
208
73
0
10 Jan 2021