Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning

20 October 2023
Lucas Weber, Elia Bruni, Dieuwke Hupkes
ArXiv (abs) · PDF · HTML

Papers citing "Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning"

21 / 21 papers shown. Each entry gives the paper title, authors, topic tags (where assigned), the site's three numeric counters, and the publication date.

On the Consistency of Multilingual Context Utilization in Retrieval-Augmented Generation
Jirui Qi, Raquel Fernández, Arianna Bisazza
RALM · 115 / 0 / 0 · 01 Apr 2025

LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale
Tim Dettmers, M. Lewis, Younes Belkada, Luke Zettlemoyer
MQ · 98 / 653 / 0 · 15 Aug 2022

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant
139 / 506 / 0 · 01 Aug 2022

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, ..., Khalid Almubarak, Xiangru Tang, Dragomir R. Radev, Mike Tian-Jian Jiang, Alexander M. Rush
VLM · 322 / 348 / 0 · 02 Feb 2022

Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts
Daniel Khashabi, Xinxi Lyu, Sewon Min, Lianhui Qin, Kyle Richardson, ..., Hannaneh Hajishirzi, Tushar Khot, Ashish Sabharwal, Sameer Singh, Yejin Choi
87 / 75 / 0 · 15 Dec 2021

Finetuned Language Models Are Zero-Shot Learners
Jason W. Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le
ALM · UQCV · 201 / 3,750 / 0 · 03 Sep 2021

Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections
Ruiqi Zhong, Kristy Lee, Zheng Zhang, Dan Klein
97 / 173 / 0 · 10 Apr 2021

The Effect of Natural Distribution Shift on Question Answering Models
John Miller, K. Krauth, Benjamin Recht, Ludwig Schmidt
OOD · 93 / 145 / 0 · 29 Apr 2020

Shortcut Learning in Deep Neural Networks
Robert Geirhos, J. Jacobsen, Claudio Michaelis, R. Zemel, Wieland Brendel, Matthias Bethge, Felix Wichmann
206 / 2,052 / 0 · 16 Apr 2020

Pretrained Transformers Improve Out-of-Distribution Robustness
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, R. Krishnan, Basel Alomair
OOD · 191 / 434 / 0 · 13 Apr 2020

Adversarial NLI: A New Benchmark for Natural Language Understanding
Yixin Nie, Adina Williams, Emily Dinan, Joey Tianyi Zhou, Jason Weston, Douwe Kiela
125 / 1,006 / 0 · 31 Oct 2019

Learning the Difference that Makes a Difference with Counterfactually-Augmented Data
Divyansh Kaushik, Eduard H. Hovy, Zachary Chase Lipton
CML · 91 / 569 / 0 · 26 Sep 2019

Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva, Yoav Goldberg, Jonathan Berant
320 / 326 / 0 · 21 Aug 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
AIMat · 659 / 24,464 / 0 · 26 Jul 2019

PAWS: Paraphrase Adversaries from Word Scrambling
Yuan Zhang, Jason Baldridge, Luheng He
73 / 543 / 0 · 01 Apr 2019

Using Pre-Training Can Improve Model Robustness and Uncertainty
Dan Hendrycks, Kimin Lee, Mantas Mazeika
NoLa · 73 / 725 / 0 · 28 Jan 2019

Hypothesis Only Baselines in Natural Language Inference
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
234 / 579 / 0 · 02 May 2018

Annotation Artifacts in Natural Language Inference Data
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, Noah A. Smith
150 / 1,176 / 0 · 06 Mar 2018

A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
Adina Williams, Nikita Nangia, Samuel R. Bowman
524 / 4,479 / 0 · 18 Apr 2017

SQuAD: 100,000+ Questions for Machine Comprehension of Text
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
RALM · 283 / 8,134 / 0 · 16 Jun 2016

On Causal and Anticausal Learning
Bernhard Schölkopf, Dominik Janzing, J. Peters, Eleni Sgouritsa, Kun Zhang, Joris Mooij
CML · 88 / 609 / 0 · 27 Jun 2012