Measuring and Improving Consistency in Pretrained Language Models
arXiv:2102.01017 (1 February 2021)
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
Tags: HILM
Papers citing "Measuring and Improving Consistency in Pretrained Language Models" (18 of 68 papers shown)

Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings
Yile Wang, Yue Zhang
20 Aug 2022

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
Tags: ReLM, LRM
24 May 2022

TempLM: Distilling Language Models into Template-Based Generators
Tianyi Zhang, Mina Lee, Lisa Li, Ende Shen, Tatsunori B. Hashimoto
Tags: VLM
23 May 2022

Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence
Myeongjun Jang, Frank Mtumbuka, Thomas Lukasiewicz
08 May 2022

Language Models in the Loop: Incorporating Prompting into Weak Supervision
Ryan Smith, Jason Alan Fries, Braden Hancock, Stephen H. Bach
04 May 2022

Prompt Consistency for Zero-Shot Task Generalization
Chunting Zhou, Junxian He, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig
Tags: VLM
29 Apr 2022

How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis
Shaobo Li, Xiaoguang Li, Lifeng Shang, Zhenhua Dong, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu
Tags: KELM
31 Mar 2022

Factual Consistency of Multilingual Pretrained Language Models
Constanza Fierro, Anders Søgaard
Tags: HILM
22 Mar 2022

Locating and Editing Factual Associations in GPT
Kevin Meng, David Bau, A. Andonian, Yonatan Belinkov
Tags: KELM
10 Feb 2022

Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs
Peter Hase, Mona T. Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Joey Tianyi Zhou, Srini Iyer
Tags: KELM, LRM
26 Nov 2021

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
Nora Kassner, Oyvind Tafjord, Hinrich Schütze, Peter Clark
Tags: KELM, LRM
29 Sep 2021

Knowledge Neurons in Pretrained Transformers
Damai Dai, Li Dong, Y. Hao, Zhifang Sui, Baobao Chang, Furu Wei
Tags: KELM, MU
18 Apr 2021

Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema
Yanai Elazar, Hongming Zhang, Yoav Goldberg, Dan Roth
Tags: ReLM, LRM
16 Apr 2021

Relational World Knowledge Representation in Contextual Language Models: A Review
Tara Safavi, Danai Koutra
Tags: KELM
12 Apr 2021

How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
Tal Linzen
03 May 2020

Knowledge Enhanced Contextual Word Representations
Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith
09 Sep 2019

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Tags: KELM, AI4MH
03 Sep 2019

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
Tags: LRM
04 Dec 2018