Parents:

Deep latent variable models

Deep latent variable models assume a generative process whereby a simple random variable is transformed from the latent space to the observed output space through a deep neural network. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) are two of the most popular variants of this approach.
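The generative process can be sketched in a few lines: draw a simple latent sample z ~ N(0, I), then push it through a deep network into the output space. A minimal numpy illustration (the layer sizes and randomly initialised weights are arbitrary choices for demonstration; a trained model would learn them):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary decoder weights for illustration; a trained model learns these.
latent_dim, hidden_dim, output_dim = 2, 16, 8
W1 = rng.normal(size=(latent_dim, hidden_dim))
W2 = rng.normal(size=(hidden_dim, output_dim))

def decode(z):
    """Map latent samples z to the observed space via a small MLP."""
    h = np.tanh(z @ W1)   # hidden nonlinearity
    return h @ W2         # e.g. parameters of p(x | z)

# Generative process: simple latent variable -> deep transformation.
z = rng.normal(size=(5, latent_dim))   # z ~ N(0, I)
x = decode(z)                          # samples mapped to output space
print(x.shape)                         # (5, 8)
```

In a GAN this decoder plays the role of the generator; in a VAE it parameterises the likelihood p(x | z).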

Related Tags:

2 Documents (Long List)

- Deep Latent-Variable Models for Natural Language - Tutorial - harvardnlp
*(About)*

[arxiv](https://arxiv.org/abs/1812.06834)

2018-11-01 - [Seminar] Deep Latent Variable Models of Natural Language
*(About)*

Both GANs and VAEs have been remarkably effective at modeling images, and the learned latent representations often correspond to interesting, semantically meaningful representations of the observed data. In contrast, GANs and VAEs have been less successful at modeling natural language, but for different reasons.

- GANs have difficulty dealing with discrete output spaces (such as natural language), as the resulting objective is no longer differentiable with respect to the generator.
- VAEs can deal with discrete output spaces, but when a powerful model (e.g. an LSTM) is used as the generator, the model learns to ignore the latent variable and simply becomes a language model.
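The VAE failure mode above (posterior collapse) is visible directly in the KL term of the ELBO: for a diagonal-Gaussian posterior q(z|x) = N(mu, diag(exp(logvar))) and a standard-normal prior, the KL has a closed form, and it is exactly zero when q matches the prior regardless of the input, i.e. when the latent carries no information about x and a strong decoder reduces to a plain language model. A minimal numpy sketch (function name is ours):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ) for a diagonal Gaussian,
    summed over latent dimensions: the KL term of the VAE ELBO."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

# An informative posterior pays a KL cost...
mu, logvar = np.array([1.0, -0.5]), np.array([0.2, -0.3])
print(kl_to_standard_normal(mu, logvar))                 # > 0

# ...while a "collapsed" posterior q(z|x) = N(0, I) costs nothing,
# which is what a powerful decoder exploits by ignoring z:
print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))   # 0.0
```

Mitigations discussed in this literature (e.g. KL annealing or weakening the decoder) work by making the zero-KL solution less attractive during training.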

2018-10-31

Properties

- sl:creationDate : 2018-10-31
- sl:creationTime : 2018-10-31T22:53:17Z
- rdf:type : sl:Tag
- skos:prefLabel : Deep latent variable models