About This Document
- sl:arxiv_firstAuthor : Tommaso Soru
- sl:arxiv_num : 1803.07828
- sl:arxiv_published : 2018-03-21T10:06:28Z
- sl:arxiv_summary : Knowledge Graph Embedding methods aim at representing entities and relations
in a knowledge base as points or vectors in a continuous vector space. Several
approaches using embeddings have shown promising results on tasks such as link
prediction, entity recommendation, question answering, and triplet
classification. However, only a few methods can compute low-dimensional
embeddings of very large knowledge bases without needing state-of-the-art
computational resources. In this paper, we propose KG2Vec, a simple and fast
approach to Knowledge Graph Embedding based on the skip-gram model. Instead of
using a predefined scoring function, we learn it relying on Long Short-Term
Memories. We show that our embeddings achieve results comparable with the most
scalable approaches on knowledge graph completion as well as on a new metric.
Yet, KG2Vec can embed large graphs in less time, processing more than 250
million triples in under 7 hours on common hardware.
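The core idea the abstract describes, applying a skip-gram model to a knowledge graph, rests on linearizing triples into token sequences that a word2vec-style trainer can consume. A minimal sketch of that preprocessing step, with illustrative data and function names that are assumptions rather than the paper's actual code:

```python
# Hypothetical sketch of the KG2Vec-style preprocessing idea: treat each
# (subject, predicate, object) triple as a short "sentence" of tokens, so
# a skip-gram trainer (e.g. gensim's Word2Vec with sg=1) can learn one
# vector per entity and relation. Data and names here are illustrative.

triples = [
    ("Leipzig", "locatedIn", "Germany"),
    ("Germany", "partOf", "Europe"),
    ("Leipzig", "hasUniversity", "Leipzig_University"),
]

def triples_to_sentences(triples):
    """Linearize each triple into a 3-token sentence for skip-gram training."""
    return [[s, p, o] for s, p, o in triples]

sentences = triples_to_sentences(triples)
print(sentences[0])  # ['Leipzig', 'locatedIn', 'Germany']
```

The resulting sentences would then be passed to a skip-gram implementation; the learned scoring function over the embeddings is, per the abstract, modeled with LSTMs rather than a fixed formula.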
- sl:arxiv_title : Expeditious Generation of Knowledge Graph Embeddings
- sl:arxiv_updated : 2018-11-09T14:26:16Z
- sl:bookmarkOf : https://arxiv.org/abs/1803.07828
- sl:creationDate : 2020-09-02
- sl:creationTime : 2020-09-02T16:57:44Z