E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT
9 November 2019
Nina Poerner, Ulli Waltinger, Hinrich Schütze
arXiv: 1911.03681

Papers citing "E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT"

29 papers shown
Fact Recall, Heuristics or Pure Guesswork? Precise Interpretations of Language Models for Fact Completion
Denitsa Saynova, Lovisa Hagström, Moa Johansson, Richard Johansson, Marco Kuhlmann · HILM · 18 Oct 2024

Recall Them All: Retrieval-Augmented Language Models for Long Object List Extraction from Long Documents
Sneha Singhania, Simon Razniewski, Gerhard Weikum · RALM · 04 May 2024

On the Evolution of Knowledge Graphs: A Survey and Perspective
Xuhui Jiang, Chengjin Xu, Yinghan Shen, Xun Sun, Lumingyuan Tang, Saizhuo Wang, Zhongwu Chen, Yuanzhuo Wang, Jian Guo · 07 Oct 2023

Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking
Samuel Broscheit · OCL · 11 Mar 2020

K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters
Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, Ming Zhou · KELM · 05 Feb 2020

How Can We Know What Language Models Know?
Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig · KELM · 28 Nov 2019

KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation
Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juan-Zi Li, Jian Tang · 13 Nov 2019

Contextualized End-to-End Neural Entity Linking
Haotian Chen, Andrej Zukov Gregoric, Xi David Li, Sahil Wadhwa · 10 Nov 2019

BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance
Timo Schick, Hinrich Schütze · 16 Oct 2019

Improving Pre-Trained Multilingual Models with Vocabulary Expansion
Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu · 26 Sep 2019

Knowledge Enhanced Contextual Word Representations
Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith · 09 Sep 2019

Show Your Work: Improved Reporting of Experimental Results
Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, Noah A. Smith · 06 Sep 2019

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel · KELM, AI4MH · 03 Sep 2019

Commonsense Knowledge Mining from Pretrained Models
Joshua Feldman, Joe Davison, Alexander M. Rush · SSL · 02 Sep 2019

StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
Wei Wang, Bin Bi, Ming Yan, Chen Henry Wu, Zuyi Bao, Jiangnan Xia, Liwei Peng, Luo Si · 13 Aug 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov · AIMat · 26 Jul 2019

XLNet: Generalized Autoregressive Pretraining for Language Understanding
Zhilin Yang, Zihang Dai, Yiming Yang, J. Carbonell, Ruslan Salakhutdinov, Quoc V. Le · AI4CE · 19 Jun 2019

ERNIE: Enhanced Language Representation with Informative Entities
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu · 17 May 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · VLM, SSL, SSeg · 11 Oct 2018

Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text
Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, William W. Cohen · 04 Sep 2018

End-to-End Neural Entity Linking
N. Kolitsas, O. Ganea, Thomas Hofmann · 23 Aug 2018

Decoupled Weight Decay Regularization
I. Loshchilov, Frank Hutter · OffRL · 14 Nov 2017

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin · 3DV · 12 Jun 2017

Question Answering on Knowledge Bases and Text using Universal Schema and Memory Networks
Rajarshi Das, Manzil Zaheer, Siva Reddy, Andrew McCallum · 27 Apr 2017

Offline bilingual word vectors, orthogonal transformations and the inverted softmax
Samuel L. Smith, David H. P. Turban, Steven Hamblin, Nils Y. Hammerla · OffRL · 13 Feb 2017

Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation
Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji · 06 Jan 2016

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba · ODL · 22 Dec 2014

Exploiting Similarities among Languages for Machine Translation
Tomas Mikolov, Quoc V. Le, Ilya Sutskever · 17 Sep 2013

Efficient Estimation of Word Representations in Vector Space
Tomas Mikolov, Kai Chen, G. Corrado, J. Dean · 3DV · 16 Jan 2013