Wikipedia Word embeddings
A set of language modeling and feature learning techniques where words from the vocabulary (and possibly phrases thereof) are mapped to vectors of real numbers in a space of low dimension relative to the vocabulary size.
~ Context-predicting models
~ Latent feature representations of words
A parameterized function mapping words in some language to vectors (perhaps 200 to 500 dimensions). Conceptually it involves a mathematical embedding from a space with one dimension per word to a continuous vector space of much lower dimension. Called "plongement lexical" in French.
Methods to generate the mapping include neural networks, dimensionality reduction on the word co-occurrence matrix, probabilistic models, and explicit representation in terms of the contexts in which words appear. In the newer generation of models, the vector estimation problem is handled as a supervised task: the weights in a word vector are set to maximize the probability of the contexts in which the word is observed in the corpus.
The mapping may be generated by training a neural network on a large corpus either to predict a word given its context (Continuous Bag Of Words model) or to predict the context given a word (skip-gram model), where the context is a window of surrounding words (see the sketch below). The best-known software for producing word embeddings is Tomas Mikolov's word2vec; pre-trained word embeddings are also available on the word2vec code.google page.
Applications:
- search document ranking
- boosting performance in NLP tasks such as syntactic parsing and sentiment analysis
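To illustrate the CBOW vs. skip-gram distinction, here is a minimal sketch using the gensim library (an assumption; the note itself only mentions Mikolov's word2vec tool). The toy corpus, variable names, and parameter values are hypothetical; in gensim, sg=0 selects CBOW, sg=1 selects skip-gram, and window sets the size of the context window.

```python
# Minimal sketch: training word embeddings with gensim's Word2Vec
# (assumes gensim >= 4.0; corpus and parameters are toy values).
from gensim.models import Word2Vec

# A corpus is a list of tokenized sentences (lists of words).
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["the", "context", "is", "a", "window", "of", "surrounding", "words"],
    ["skip", "gram", "predicts", "the", "context", "given", "a", "word"],
]

# CBOW (sg=0): predict a word from its surrounding context window.
cbow = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=0)

# Skip-gram (sg=1): predict the surrounding context from the word.
skipgram = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)

# Each word is now a dense real-valued vector of dimension vector_size.
vec = skipgram.wv["context"]   # numpy array of shape (100,)
print(vec.shape)

# Similarity queries use cosine distance in the embedding space.
print(skipgram.wv.most_similar("word", topn=3))
```

In practice the model would be trained on a large corpus, or the pre-trained vectors mentioned above would be loaded instead of training from scratch.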