Static Embeddings as Efficient Knowledge Bases? (arXiv 2104.07094)
Philipp Dufter, Nora Kassner, Hinrich Schütze
14 April 2021

Papers citing "Static Embeddings as Efficient Knowledge Bases?" (29 / 29 papers shown)

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
HILM · 318 · 367 · 0 · 01 Feb 2021

Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models
Nora Kassner, Philipp Dufter, Hinrich Schütze
69 · 141 · 0 · 01 Feb 2021

When Do You Need Billions of Words of Pretraining Data?
Yian Zhang, Alex Warstadt, Haau-Sing Li, Samuel R. Bowman
51 · 141 · 0 · 10 Nov 2020

JAKET: Joint Pre-training of Knowledge Graph and Language Understanding
Donghan Yu, Chenguang Zhu, Yiming Yang, Michael Zeng
KELM · 45 · 144 · 0 · 02 Oct 2020

Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries
Benjamin Heinzerling, Kentaro Inui
KELM · 55 · 130 · 0 · 20 Aug 2020

Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering
Gautier Izacard, Edouard Grave
RALM · 117 · 1,170 · 0 · 02 Jul 2020

Pre-training via Paraphrasing
M. Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida I. Wang, Luke Zettlemoyer
AIMat · 80 · 160 · 0 · 26 Jun 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
BDL · 736 · 41,894 · 0 · 28 May 2020

How Context Affects Language Models' Factual Predictions
Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM · 49 · 239 · 0 · 10 May 2020

BERT-kNN: Adding a kNN Search Component to Pretrained Language Models for Better QA
Nora Kassner, Hinrich Schütze
RALM · 76 · 68 · 0 · 02 May 2020

How Much Knowledge Can You Pack Into the Parameters of a Language Model?
Adam Roberts, Colin Raffel, Noam M. Shazeer
KELM · 106 · 890 · 0 · 10 Feb 2020

REALM: Retrieval-Augmented Language Model Pre-Training
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang
RALM · 115 · 2,093 · 0 · 10 Feb 2020

K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters
Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, Ming Zhou
KELM · 87 · 553 · 0 · 05 Feb 2020

Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model
Wenhan Xiong, Jingfei Du, William Yang Wang, Veselin Stoyanov
SSL · KELM · 87 · 201 · 0 · 20 Dec 2019

Inducing Relational Knowledge from BERT
Zied Bouraoui, Jose Camacho-Collados, Steven Schockaert
83 · 167 · 0 · 28 Nov 2019

How Can We Know What Language Models Know?
Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig
KELM · 130 · 1,403 · 0 · 28 Nov 2019

E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT
Nina Poerner, Ulli Waltinger, Hinrich Schütze
93 · 160 · 0 · 09 Nov 2019

Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
Nora Kassner, Hinrich Schütze
68 · 323 · 0 · 08 Nov 2019

K-BERT: Enabling Language Representation with Knowledge Graph
Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, Ping Wang
299 · 789 · 0 · 17 Sep 2019

Knowledge Enhanced Contextual Word Representations
Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith
277 · 658 · 0 · 09 Sep 2019

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM · AI4MH · 566 · 2,664 · 0 · 03 Sep 2019

What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models
Allyson Ettinger
81 · 604 · 0 · 31 Jul 2019

COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi
82 · 910 · 0 · 12 Jun 2019

Energy and Policy Considerations for Deep Learning in NLP
Emma Strubell, Ananya Ganesh, Andrew McCallum
62 · 2,651 · 0 · 05 Jun 2019

ERNIE: Enhanced Language Representation with Informative Entities
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu
92 · 1,397 · 0 · 17 May 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
VLM · SSL · SSeg · 1.7K · 94,729 · 0 · 11 Oct 2018

Deep contextualized word representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
NAI · 204 · 11,546 · 0 · 15 Feb 2018

Corpus-level Fine-grained Entity Typing
Yadollah Yaghoobzadeh, Heike Adel, Hinrich Schütze
42 · 33 · 0 · 07 Aug 2017

Enriching Word Vectors with Subword Information
Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov
NAI · SSL · VLM · 220 · 9,958 · 0 · 15 Jul 2016