- [Dehghani2017] Dehghani, M., Zamani, H., Severyn, A., Kamps, J., Croft, W. B. Neural Ranking Models with Weak Supervision. SIGIR, pp. 65-74, 2017
- [Faruqui2014] Faruqui, M., Dodge, J., Jauhar, S. K., Dyer, C., Hovy, E., Smith, N. A. Retrofitting Word Vectors to Semantic Lexicons. NAACL, 2015
- [Moreno2017] Moreno, J. G., Besançon, R., Beaumont, R., D’hondt, E., Ligozat, A. L., Rosset, S., Grau, B. Combining Word and Entity Embeddings for Entity Linking. ESWC, pp. 337-352, 2017
- [Nickel2017] Nickel, M., Kiela, D. Poincaré Embeddings for Learning Hierarchical Representations. NIPS, pp. 6341-6350, 2017
- [Nguyen2017] Nguyen, G. H., Tamine, L., Soulier, L., Souf, N. Learning Concept-Driven Document Embeddings for Medical Information Search. AIME, pp. 160-170, 2017
- [Nguyen2018] Nguyen, G. H., Tamine, L., Soulier, L., Souf, N. A Tri-Partite Neural Document Language Model for Semantic Information Retrieval. ESWC, 2018
- [Yu2014] Yu, M., Dredze, M. Improving Lexical Embeddings with Semantic Knowledge. ACL, pp. 545-550, 2014
- [Wang2014] Wang, Z., Zhang, J., Feng, J., Chen, Z. Knowledge Graph and Text Jointly Embedding. EMNLP, pp. 1591-1601, 2014
- [Yamada2016] Yamada, I., Shindo, H., Takeda, H., Takefuji, Y. Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation. CoNLL, pp. 250-259, 2016
[1704.08803] Neural Ranking Models with Weak Supervision (2017). Main idea: leverage large amounts of unsupervised data to infer "weak" labels, then use that signal to train supervised models as if we had the ground-truth labels. See [blog post](/doc/?uri=http%3A%2F%2Fmostafadehghani.com%2F2017%2F04%2F23%2Fbeating-the-teacher-neural-ranking-models-with-weak-supervision%2F):
> This is truly awesome since we have only used BM25 as the supervisor to train a model which performs better than BM25 itself!
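A minimal sketch of that recipe: BM25 (implemented inline) acts as the weak supervisor, and a toy linear scorer is trained with a pairwise hinge loss on its preferences. The features and model here are illustrative stand-ins; the paper trains neural rankers on learned query/document representations.

```python
# Weak supervision à la Dehghani et al. (2017): BM25 provides noisy labels,
# a learned ranker is trained to reproduce (and hopefully exceed) them.
import numpy as np

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Plain BM25 over whitespace-tokenized docs (the 'weak supervisor')."""
    tokenized = [d.split() for d in docs]
    avgdl = np.mean([len(d) for d in tokenized])
    N = len(docs)
    scores = np.zeros(N)
    for term in query.split():
        df = sum(term in d for d in tokenized)
        if df == 0:
            continue
        idf = np.log((N - df + 0.5) / (df + 0.5) + 1.0)
        for i, d in enumerate(tokenized):
            tf = d.count(term)
            scores[i] += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
    return scores

# Toy data: one query, three docs; features are hypothetical overlap/length counts.
docs = ["neural ranking models", "weak supervision for ranking", "cooking recipes"]
query = "neural ranking"
weak = bm25_scores(query, docs)                   # noisy labels, no human judgments
X = np.array([[sum(t in d.split() for t in query.split()), len(d.split())]
              for d in docs], dtype=float)        # toy feature vectors

# Pairwise hinge loss: prefer the doc that BM25 ranks higher.
w = np.zeros(X.shape[1])
for _ in range(100):
    for i in range(len(docs)):
        for j in range(len(docs)):
            if weak[i] > weak[j]:                 # weak preference: i over j
                if (X[i] - X[j]) @ w < 1.0:       # hinge: push the pair apart
                    w += 0.01 * (X[i] - X[j])
print("learned scores:", X @ w)
```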
[1601.01343] Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation (2016)
> An embedding method specifically designed for NED that jointly **maps words and entities into the same continuous vector space**.
> We extend the skip-gram model by using two models. The KB graph model learns the relatedness of entities using the link structure of the KB, whereas the anchor context model aims to align vectors such that similar words and entities occur close to one another in the vector space by leveraging KB anchors and their context words.
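A rough way to reproduce such a shared word/entity space with off-the-shelf tools is to replace KB anchor text with entity tokens and train one skip-gram model over the result. The `ENTITY/` prefix and the use of gensim below are assumptions for illustration only; the paper's extended skip-gram and its KB graph model are not sketched here.

```python
# Approximation of the anchor-context idea: anchors become entity tokens,
# so words and entities end up in one vector space after skip-gram training.
from gensim.models import Word2Vec

# Sentences where KB anchors have been replaced by entity identifiers.
sentences = [
    ["ENTITY/Paris", "is", "the", "capital", "of", "ENTITY/France"],
    ["the", "capital", "city", "of", "ENTITY/Germany", "is", "ENTITY/Berlin"],
    ["ENTITY/Paris", "hosts", "the", "louvre", "museum"],
]

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, sg=1, epochs=200)

# Words and entities now share a space and can be compared directly.
print(model.wv.similarity("ENTITY/Paris", "capital"))
```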
A Tri-Partite Neural Document Language Model for Semantic Information Retrieval (2018, ESWC). From the abstract: previous work in information retrieval has shown that using evidence, such as concepts and relations, from external knowledge sources can enhance retrieval performance... This paper presents a new tri-partite neural document language framework that leverages explicit knowledge to jointly constrain word, concept, and document representations, tackling issues such as polysemy and granularity mismatch.
Retrofitting Word Vectors to Semantic Lexicons (2015). A method for refining vector-space representations using relational information from semantic lexicons **by encouraging linked words to have similar vector representations**; it makes no assumptions about how the input vectors were constructed.
A graph-based learning technique for using lexical relational resources to obtain higher-quality semantic vectors, which the authors call "retrofitting". Retrofitting is applied as a **post-processing step** by running belief propagation on a graph constructed from lexicon-derived relational information to update word vectors, so it can be used on pre-trained word vectors obtained with any vector-training model.
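The update behind retrofitting is simple enough to sketch directly. The snippet below follows the paper's iterative rule with its default choices (alpha_i = 1, beta_ij = 1/degree(i), about 10 iterations), on toy two-dimensional vectors.

```python
# Retrofitting (Faruqui et al.): iteratively pull each vector toward the
# average of its lexicon neighbours while anchoring it to its original,
# pre-trained value. Converges in ~10 iterations in the paper.
import numpy as np

def retrofit(vectors, lexicon, iters=10, alpha=1.0):
    """vectors: {word: np.array}; lexicon: {word: [neighbour words]}."""
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iters):
        for w, nbrs in lexicon.items():
            nbrs = [n for n in nbrs if n in new]
            if not nbrs:
                continue
            beta = 1.0 / len(nbrs)               # paper default: beta_ij = 1/deg(i)
            # Closed-form update: weighted mean of neighbours plus original vector.
            new[w] = (beta * sum(new[n] for n in nbrs) + alpha * vectors[w]) \
                     / (beta * len(nbrs) + alpha)
    return new

# Toy example: "happy" and "glad" are linked in the lexicon, so their
# retrofitted vectors move toward each other without losing their anchors.
vecs = {"happy": np.array([1.0, 0.0]), "glad": np.array([0.0, 1.0])}
lex = {"happy": ["glad"], "glad": ["happy"]}
print(retrofit(vecs, lex))
```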
Knowledge Graph and Text Jointly Embedding (2014). A method of jointly embedding knowledge graphs and a text corpus so that entities and words/phrases are represented in the same vector space.
It shows promising improvement in the accuracy of predicting facts compared to embedding knowledge graphs and text separately (in particular, it enables predicting facts involving entities that are out of the knowledge graph).
[cited by J. Moreno](/doc/?uri=https%3A%2F%2Fhal.archives-ouvertes.fr%2Fhal-01626196%2Fdocument)
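A schematic sketch of what "jointly embedding" means in practice: one loss term for knowledge-graph triples (TransE-style), one for text co-occurrence (skip-gram-style), and one alignment term tying entities to the words in their names. The PyTorch code below is an illustrative simplification; the paper aligns through entity names and Wikipedia anchors with its own loss formulation.

```python
# Three loss components over shared embedding tables: knowledge, text, alignment.
# Dimensions, margins, and the sampling scheme are illustrative choices.
import torch, torch.nn as nn

dim, n_words, n_entities, n_relations = 16, 100, 50, 10
words = nn.Embedding(n_words, dim)
entities = nn.Embedding(n_entities, dim)
relations = nn.Embedding(n_relations, dim)

def knowledge_loss(h, r, t):
    """TransE-style: head + relation should land near tail."""
    return (entities(h) + relations(r) - entities(t)).norm(dim=-1).mean()

def text_loss(w, c):
    """Skip-gram-style: co-occurring words should have a high dot product."""
    return -torch.log(torch.sigmoid((words(w) * words(c)).sum(-1))).mean()

def alignment_loss(e, name_words):
    """Tie an entity's vector to the mean of its name's word vectors."""
    return (entities(e) - words(name_words).mean(dim=0)).norm()

h, r, t = torch.tensor([0]), torch.tensor([1]), torch.tensor([2])
w, c = torch.tensor([3]), torch.tensor([4])
loss = (knowledge_loss(h, r, t) + text_loss(w, c)
        + alignment_loss(torch.tensor(0), torch.tensor([3, 4])))
loss.backward()   # all three terms update the shared embedding tables
```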
Combining word and entity embeddings for entity linking (ESWC 2017). The general approach to entity linking is to generate, for a given mention, a set of candidate entities from the knowledge base and then, in a second step, determine which one is the best. This paper proposes a novel method for the second step based on the **joint learning of embeddings for the words in the text and the entities in the knowledge base**.
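Once words and entities share a space, the second (ranking) step can be as simple as comparing a mention's context with each candidate. The cosine-based scoring below is a simplification of the paper's method, with made-up toy vectors.

```python
# Rank candidate entities by similarity between the mention's textual context
# (mean of word vectors) and each entity vector in the shared space.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def rank_candidates(context_words, candidates, word_vecs, entity_vecs):
    """context_words: tokens around the mention; candidates: entity ids."""
    ctx = np.mean([word_vecs[w] for w in context_words if w in word_vecs], axis=0)
    return sorted(candidates, key=lambda e: cosine(ctx, entity_vecs[e]), reverse=True)

# Toy joint space: "Paris_France" sits near geography words,
# "Paris_Hilton" near celebrity words.
word_vecs = {"capital": np.array([1.0, 0.1]), "river": np.array([0.9, 0.2]),
             "actress": np.array([0.1, 1.0])}
entity_vecs = {"Paris_France": np.array([1.0, 0.0]),
               "Paris_Hilton": np.array([0.0, 1.0])}
print(rank_candidates(["capital", "river"],
                      ["Paris_France", "Paris_Hilton"], word_vecs, entity_vecs))
```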
Poincaré Embeddings for Learning Hierarchical Representations (2017)
> While complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically learn embeddings in Euclidean vector spaces, which do not account for this property. For this purpose, we introduce a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space.
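For reference, the distance function on the Poincaré ball that the paper optimizes over. The example points below are arbitrary; they just show how distances stretch near the boundary of the ball, which is what lets a few dimensions encode deep hierarchies.

```python
# Poincaré-ball distance (Nickel & Kiela): points live inside the unit ball;
# distances grow rapidly as points approach the boundary.
import numpy as np

def poincare_distance(u, v):
    """d(u, v) = arcosh(1 + 2 ||u-v||^2 / ((1-||u||^2)(1-||v||^2)))"""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

root = np.array([0.0, 0.0])          # near the origin: a "general" node
leaf = np.array([0.0, 0.9])          # near the boundary: a "specific" node
leaf2 = np.array([0.05, 0.9])
print(poincare_distance(root, leaf))   # ~2.9
print(poincare_distance(leaf, leaf2))  # ~0.5: a Euclidean gap of 0.05, stretched ~10x
```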