Embedding Entities and Relations for Learning and Inference in Knowledge Bases

We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN and TransE, can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (a top-10 accuracy of 73.2%, vs. 54.7% by TransE, when evaluated on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules from the KB. We demonstrate that embeddings trained with the bilinear objective can effectively capture relation composition via matrix multiplication. We also show that our embedding-based approach can extract rules that involve relation transitivity more effectively than a state-of-the-art rule mining approach tailored for large-scale KBs.
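As a minimal sketch of the two ideas the abstract describes, the snippet below illustrates a bilinear scoring function with a diagonal relation matrix (the DistMult-style formulation) and how relation composition reduces to a product of relation parameters. The embeddings here are random placeholders rather than the paper's trained vectors, and the similarity-based rule-scoring step is an assumption about how composed and target relations might be compared, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # illustrative embedding dimension (an assumption, not the paper's setting)

# Placeholder embeddings; in the paper these are learned from a neural network.
h = rng.normal(size=dim)  # head entity vector
t = rng.normal(size=dim)  # tail entity vector
r = rng.normal(size=dim)  # relation, stored as the diagonal of a bilinear matrix

def bilinear_score(h, r, t):
    """Bilinear score h^T diag(r) t; a higher score means a more plausible triple."""
    return float(np.sum(h * r * t))

print(bilinear_score(h, r, t))

# Relation composition via matrix multiplication: with diagonal relation
# matrices, diag(r1) @ diag(r2) == diag(r1 * r2), so a candidate rule
# r1(a, b) AND r2(b, c) => r3(a, c) can be scored by how close the
# elementwise product r1 * r2 is to r3 (cosine similarity is one choice).
r1 = rng.normal(size=dim)
r2 = rng.normal(size=dim)
r3 = r1 * r2  # composed relation under this diagonal parameterization

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# 1.0 here by construction; with learned embeddings, high similarity
# would suggest the composition rule holds approximately in the KB.
print(cosine(r1 * r2, r3))
```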