A Tri-Partite Neural Document Language Model for Semantic Information Retrieval (ESWC 2018). From the abstract: Previous work in information retrieval has shown that using evidence, such as concepts and relations, from external knowledge sources could enhance retrieval performance... This paper presents a new tri-partite neural document language framework that leverages explicit knowledge to jointly constrain word, concept, and document representations, tackling issues such as polysemy and granularity mismatch.
Knowledge Graph and Text Jointly Embedding (EMNLP 2014), Zhen Wang et al. A method of jointly embedding knowledge graphs and a text corpus so that entities and words/phrases are represented in the same vector space.
Reports promising improvements in fact-prediction accuracy compared to embedding the knowledge graph and the text separately; in particular, it enables predicting facts involving entities that are not in the knowledge graph.
[cited by J. Moreno](/doc/?uri=https%3A%2F%2Fhal.archives-ouvertes.fr%2Fhal-01626196%2Fdocument)
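A rough sketch of the formulation (the symbols here are illustrative shorthand for the paper's three components, not its exact notation): the joint objective sums a knowledge model over triples, a text model over the corpus, and an alignment model that ties entity and word embeddings into one space via entity names and anchor text.

```latex
% Hedged sketch of the joint embedding objective; symbols are
% my shorthand, not the paper's notation.
% L_K  scores knowledge-graph triples (h, r, t),
% L_T  is a word2vec-style likelihood over the text corpus,
% L_A  aligns entities with the words/phrases that name them.
\mathcal{L} = \mathcal{L}_K + \mathcal{L}_T + \mathcal{L}_A
```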
Combining Word and Entity Embeddings for Entity Linking (ESWC 2017). The general approach to the entity linking task is to generate, for a given mention, a set of candidate entities from the knowledge base and, in a second step, determine which one is the best. This paper proposes a novel method for the second step, based on the **joint learning of embeddings for the words in the text and the entities in the knowledge base**.
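A minimal sketch of how such joint embeddings could be used in the ranking step (function and variable names are illustrative assumptions, not the paper's method): score each candidate entity by the cosine similarity between its embedding and a context vector for the mention, both living in the shared word/entity space. Candidate generation (the first step) is assumed to have already produced `candidates`.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def rank_candidates(context_words, candidates, word_vecs, entity_vecs):
    """Rank candidate entities for a mention by similarity to its context.

    word_vecs and entity_vecs map tokens / entity ids to vectors that
    live in the same (jointly learned) space.
    """
    # Represent the mention by the mean of its context word vectors.
    vecs = [word_vecs[w] for w in context_words if w in word_vecs]
    ctx = np.mean(vecs, axis=0)
    # Score each candidate entity against the context representation.
    scored = [(e, cosine(entity_vecs[e], ctx)) for e in candidates]
    return sorted(scored, key=lambda s: s[1], reverse=True)
```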
Enriching Word Embeddings Using Knowledge Graph for Semantic Tagging in Conversational Dialog Systems (Microsoft Research)
> New, simple yet effective approaches to learn domain-specific word embeddings.
> Adapting word embeddings, such as jointly capturing syntactic and semantic information, can further enrich semantic word representations for several tasks, e.g., sentiment analysis (Tang et al. 2014), named entity recognition (Lebret, Legrand, and Collobert 2013), entity-relation extraction (Weston et al. 2013), etc. Yu and Dredze (2014) introduced a lightly supervised word embedding learning method extending word2vec. They incorporate prior information into the objective function as a regularization term considering synonymy relations between words from WordNet (Fellbaum 1999).
> In this work, we go one step further and investigate if enriching the word2vec word embeddings trained on unstructured/unlabeled text with domain-specific semantic relations obtained from knowledge sources (e.g., knowledge graphs, search query logs, etc.) can help to discover relation-aware word embeddings. Unlike earlier work, **we encode the information about the relations between phrases; thereby, entities and relation mentions are all embedded into a low-dimensional space**.
## Related work (Learning Word Embeddings with Priors)
- Relation Constrained Model (RCM) (Yu and Dredze 2014)
While CBOW learns lexical word embeddings from the provided text, the RCM learns embeddings of words based on their relatedness to other words as given by a knowledge resource (e.g., WordNet).
- Joint model (Yu and Dredze 2014)
combines the CBOW and RCM objectives through a linear combination (see the sketch after this list).
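A sketch of both objectives, reconstructed in the general form of Yu and Dredze (2014) rather than quoted from the paper: RCM raises the likelihood of words that the resource marks as related, and the joint model adds this term to CBOW with a weight.

```latex
% RCM: for each word w_i, push the embeddings of its resource-related
% words R_{w_i} (e.g., WordNet synonyms) toward it via a softmax.
\mathcal{L}_{\mathrm{RCM}} = \frac{1}{N} \sum_{i=1}^{N}
    \sum_{w \in R_{w_i}} \log p(w \mid w_i)

% Joint model: linear combination of the CBOW text objective and the
% relation-constrained term, weighted by a hyperparameter C.
\mathcal{J} = \mathcal{L}_{\mathrm{CBOW}} + C \, \mathcal{L}_{\mathrm{RCM}}
```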