Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
Prasetya Ajie Utama, N. Moosavi, Victor Sanh, Iryna Gurevych
arXiv 2109.04144 · 9 September 2021 [AAML]
Papers citing "Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning" (50 of 51 citing papers shown):

Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence
Tal Schuster, Adam Fisch, Regina Barzilay · 15 Mar 2021

How Many Data Points is a Prompt Worth?
Teven Le Scao, Alexander M. Rush · 15 Mar 2021 [VLM]

Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU Models
Mengnan Du, Varun Manjunatha, R. Jain, Ruchi Deshpande, Franck Dernoncourt, Jiuxiang Gu, Tong Sun, Xia Hu · 11 Mar 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen · 31 Dec 2020

Learning from others' mistakes: Avoiding dataset biases without modeling them
Victor Sanh, Thomas Wolf, Yonatan Belinkov, Alexander M. Rush · 02 Dec 2020

Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles
Christopher Clark, Mark Yatskar, Luke Zettlemoyer · 07 Nov 2020

Automatically Identifying Words That Can Serve as Labels for Few-Shot Text Classification
Timo Schick, Helmut Schmid, Hinrich Schütze · 26 Oct 2020 [VLM]

Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures
N. Moosavi, M. Boer, Prasetya Ajie Utama, Iryna Gurevych · 23 Oct 2020

Characterising Bias in Compressed Models
Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, Emily L. Denton · 06 Oct 2020

Towards Debiasing NLU Models from Unknown Biases
Prasetya Ajie Utama, N. Moosavi, Iryna Gurevych · 25 Sep 2020

It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners
Timo Schick, Hinrich Schütze · 15 Sep 2020

On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
Marius Mosbach, Maksym Andriushchenko, Dietrich Klakow · 08 Jun 2020

Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance
Prasetya Ajie Utama, N. Moosavi, Iryna Gurevych · 01 May 2020 [OODD]

Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting
Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, Xiangzhan Yu · 27 Apr 2020 [KELM, CLL]

Syntactic Data Augmentation Increases Robustness to Inference Heuristics
Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen · 24 Apr 2020

Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith · 23 Apr 2020 [VLM, AI4CE, CLL]

The Right Tool for the Job: Matching Model and Instance Complexities
Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, Noah A. Smith · 16 Apr 2020

Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, Noah A. Smith · 15 Feb 2020

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze · 21 Jan 2020

Adversarial NLI: A New Benchmark for Natural Language Understanding
Yixin Nie, Adina Williams, Emily Dinan, Joey Tianyi Zhou, Jason Weston, Douwe Kiela · 31 Oct 2019

Diversify Your Datasets: Analyzing Generalization via Controlled Variance in Adversarial Datasets
Ohad Rozen, Vered Shwartz, Roee Aharoni, Ido Dagan · 21 Oct 2019 [AAML]

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf · 02 Oct 2019

Learning the Difference that Makes a Difference with Counterfactually-Augmented Data
Divyansh Kaushik, Eduard H. Hovy, Zachary Chase Lipton · 26 Sep 2019 [CML]

End-to-End Bias Mitigation by Modelling Biases in Corpora
Rabeeh Karimi Mahabadi, Yonatan Belinkov, James Henderson · 13 Sep 2019

Unlearn Dataset Bias in Natural Language Inference by Fitting the Residual
He He, Sheng Zha, Haohan Wang · 28 Aug 2019

Towards Debiasing Fact Verification Models
Tal Schuster, Darsh J. Shah, Yun Jie Serene Yeo, Daniel Filizzola, Enrico Santus, Regina Barzilay · 14 Aug 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov · 26 Jul 2019 [AIMat]

Green AI
Roy Schwartz, Jesse Dodge, Noah A. Smith, Oren Etzioni · 22 Jul 2019

Probing Neural Network Comprehension of Natural Language Arguments
Timothy Niven, Hung-Yu Kao · 17 Jul 2019 [AAML]

HellaSwag: Can a Machine Really Finish Your Sentence?
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, Yejin Choi · 19 May 2019

BERT Rediscovers the Classical NLP Pipeline
Ian Tenney, Dipanjan Das, Ellie Pavlick · 15 May 2019 [MILM, SSeg]

Inoculation by Fine-Tuning: A Method for Analyzing Challenge Datasets
Nelson F. Liu, Roy Schwartz, Noah A. Smith · 04 Apr 2019 [AAML]

PAWS: Paraphrase Adversaries from Word Scrambling
Yuan Zhang, Jason Baldridge, Luheng He · 01 Apr 2019

Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
R. Thomas McCoy, Ellie Pavlick, Tal Linzen · 04 Feb 2019

Analyzing Compositionality-Sensitivity of NLI Models
Yixin Nie, Yicheng Wang, Joey Tianyi Zhou · 16 Nov 2018 [CoGe]

How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks
Divyansh Kaushik, Zachary Chase Lipton · 14 Aug 2018 [ELM]

Stress Test Evaluation for Natural Language Inference
Aakanksha Naik, Abhilasha Ravichander, Norman M. Sadeh, Carolyn Rose, Graham Neubig · 02 Jun 2018 [ELM]

Breaking NLI Systems with Sentences that Require Simple Lexical Inferences
Max Glockner, Vered Shwartz, Yoav Goldberg · 06 May 2018 [NAI]

Hypothesis Only Baselines in Natural Language Inference
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme · 02 May 2018

Performance Impact Caused by Hidden Bias of Training Data for Recognizing Textual Entailment
Masatoshi Tsuchiya · 22 Apr 2018

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · 20 Apr 2018 [ELM]

Evaluating Compositionality in Sentence Embeddings
Ishita Dasgupta, Demi Guo, Andreas Stuhlmuller, S. Gershman, Noah D. Goodman · 12 Feb 2018 [CoGe]

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
Benoit Jacob, S. Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, Dmitry Kalenichenko · 15 Dec 2017 [MQ]

Adversarial Examples for Evaluating Reading Comprehension Systems
Robin Jia, Percy Liang · 23 Jul 2017 [AAML, ELM]

A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
Adina Williams, Nikita Nangia, Samuel R. Bowman · 18 Apr 2017

The Effect of Different Writing Tasks on Linguistic Style: A Case Study of the ROC Story Cloze Task
Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, Noah A. Smith · 07 Feb 2017

Overcoming catastrophic forgetting in neural networks
J. Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, J. Veness, Guillaume Desjardins, ..., A. Grabska-Barwinska, Demis Hassabis, Claudia Clopath, D. Kumaran, R. Hadsell · 02 Dec 2016 [CLL]

A large annotated corpus for learning natural language inference
Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning · 21 Aug 2015

Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally · 08 Jun 2015 [CVBM]

Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean · 09 Mar 2015 [FedML]