Wikipedia
Variational autoencoder (VAE)

A VAE has the same architecture as an autoencoder, but makes strong assumptions about the distribution of the latent variables. VAEs take a variational approach to latent-representation learning, trained with the "Stochastic Gradient Variational Bayes" (SGVB) algorithm.
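A minimal numpy sketch of the encoder step at the heart of SGVB: the encoder outputs the mean and log-variance of a diagonal Gaussian q(z|x), a latent sample is drawn via the reparameterization trick so gradients can flow through the sampling step, and the KL term of the ELBO has a closed form. All names, weights, and dimensions here are illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Encoder maps input x to the parameters of a diagonal Gaussian q(z|x).
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients flow through mu and logvar (the key idea in SGVB).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( q(z|x) || N(0, I) ) for a diagonal Gaussian;
    # this is the regularizer in the VAE objective (ELBO).
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# Toy dimensions: 4-dim input, 2-dim latent space.
x = rng.standard_normal((1, 4))
W_mu = rng.standard_normal((4, 2)) * 0.1
W_logvar = rng.standard_normal((4, 2)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
kl = kl_to_standard_normal(mu, logvar)
```

A full training loop would add a decoder and maximize the reconstruction term minus this KL term by stochastic gradient ascent.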

3 Documents

- The Kanerva Machine: A Generative Distributed Memory | OpenReview (2018)

A generative memory model that combines slow-learning neural networks and a fast-adapting linear Gaussian model as memory

2018-12-06 - [Seminar] Deep Latent Variable Models of Natural Language

Both GANs and VAEs have been remarkably effective at modeling images, and the learned latent representations often correspond to interesting, semantically meaningful representations of the observed data. In contrast, GANs and VAEs have been less successful at modeling natural language, but for different reasons:

- GANs have difficulty dealing with discrete output spaces (such as natural language), as the resulting objective is no longer differentiable with respect to the generator.
- VAEs can deal with discrete output spaces, but when a powerful model (e.g. an LSTM) is used as the generator, the model learns to ignore the latent variable and simply becomes a language model.
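The GAN difficulty can be seen with a tiny finite-difference check: sampling a discrete token (here via argmax, an illustrative stand-in for generation) is piecewise constant in the logits, so its gradient is zero almost everywhere and the generator receives no training signal. The numbers below are toy values, not from the source.

```python
import numpy as np

def sample_token(logits):
    # Discrete "generation": pick the highest-scoring token id.
    return int(np.argmax(logits))

logits = np.array([0.2, 1.5, -0.3])
base = sample_token(logits)

# Finite-difference check: small perturbations of the logits leave the
# sampled token unchanged, so the derivative of the output with respect
# to every logit is zero -- the objective is not usefully differentiable
# with respect to the generator's parameters.
eps = 1e-4
grads = []
for i in range(len(logits)):
    bumped = logits.copy()
    bumped[i] += eps
    grads.append((sample_token(bumped) - base) / eps)
```

The VAE failure mode is different: nothing stops the optimizer from driving the KL term to zero, at which point the latent variable carries no information and a strong LSTM decoder models the text on its own (so-called posterior collapse).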

2018-10-31 - What a Disentangled Net We Weave: Representation Learning in VAEs (Pt. 1)

Properties

- sl:creationDate : 2018-05-29
- sl:creationTime : 2018-05-29T15:06:15Z
- sl:describedBy : https://en.wikipedia.org/wiki/Autoencoder#Variational_autoencoder_(VAE)
- rdf:type : sl:Tag
- skos:altLabel : VAE
- skos:prefLabel : Variational autoencoder (VAE)