AutoEncoder
Limitations of autoencoders for content generation
At this point, a natural question comes to mind: “what is the link between autoencoders and content generation?”. Indeed, once the autoencoder has been trained, we have both an encoder and a decoder but still no real way to produce new content. At first sight, we could be tempted to think that, if the latent space is regular enough (well “organized” by the encoder during training), we could take a point at random from that latent space and decode it to get new content. The decoder would then act more or less like the generator of a Generative Adversarial Network.
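To make the idea concrete, here is a minimal NumPy sketch of this naive “sample and decode” strategy. The linear encoder/decoder and their weights are made up for illustration (standing in for a trained model); the point is only the workflow: draw a random latent point and push it through the decoder. Nothing guarantees the sampled point lies in a region the decoder maps to meaningful content, which is exactly the limitation discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, latent_dim = 8, 2

# Stand-ins for weights that would come from training the autoencoder.
W_enc = rng.normal(size=(latent_dim, input_dim))
W_dec = rng.normal(size=(input_dim, latent_dim))

def encode(x):
    # Toy linear encoder: project input down to the latent space.
    return W_enc @ x

def decode(z):
    # Toy linear decoder: map a latent point back to input space.
    return W_dec @ z

# Naive generation attempt: pick a random latent point and decode it.
z = rng.normal(size=latent_dim)
x_new = decode(z)
print(x_new.shape)
```

Whether `x_new` resembles the training data depends entirely on how the encoder organized the latent space, which a plain autoencoder does not control.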
Reposted from:
VAE : https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
GAN: https://towardsdatascience.com/understanding-generative-adversarial-networks-gans-cd6e4651a29