Glow: Generative Flow with Invertible 1 × 1 Convolutions
Diederik P. Kingma, Prafulla Dhariwal
Abstract
flow-based generative models: tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis
Glow: a simple type of generative flow using an invertible 1 × 1 convolution
capable of efficient realistic-looking synthesis and manipulation of large images
code: https://github.com/openai/glow
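A minimal sketch of the idea in the title: a 1 × 1 convolution with equal input and output channels is just a learned c × c matrix W applied to the channel vector at every spatial position, so it is invertible whenever W is, and its log-determinant contribution is cheap to compute. The shapes, the orthogonal initialization, and the variable names below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature map of height H, width Wd, with c channels (toy sizes, assumed).
H, Wd, c = 4, 4, 3
x = rng.normal(size=(H, Wd, c))

# Initialize W as a random rotation (orthogonal), so it is invertible
# and |det W| = 1 at initialization.
W = np.linalg.qr(rng.normal(size=(c, c)))[0]

# Forward: mix channels at every pixel with the same c x c matrix.
y = x @ W.T

# Inverse: apply W^{-1} at every pixel to recover the input exactly.
x_rec = y @ np.linalg.inv(W).T

# The Jacobian log-determinant is the same at every pixel, so the
# total contribution to the log-likelihood is H * Wd * log|det W|.
logdet = H * Wd * np.log(np.abs(np.linalg.det(W)))
```

Because the matrix is shared across all H × Wd positions, the determinant of one small c × c matrix suffices, which is what keeps this layer tractable.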
Introduction
two major unsolved problems of machine learning:
(1) data-efficiency: the ability to learn from few datapoints, like humans;
(2) generalization: robustness to changes of the task or its context
generative models:
(1) learning realistic world models
(2) learning meaningful features of the input while requiring little or no human supervision or labeling
merits of flow-based generative models:
1. Exact latent-variable inference and log-likelihood evaluation
2. Efficient inference and efficient synthesis
3. Useful latent space for downstream tasks
4. Significant potential for memory savings
Background: Flow-based Generative Models
x: high-dimensional random vector with unknown true distribution x ∼ p∗(x)
D: i.i.d. dataset of datapoints drawn from p∗(x)
pθ(x): model with parameters θ
log-likelihood objective (equivalent to the expected compression cost): minimize (1/N) Σᵢ −log pθ(x⁽ⁱ⁾)
z: latent variable with a tractable, simple pdf, e.g. a spherical Gaussian pθ(z) = N(z; 0, I)
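The definitions above combine through the change-of-variables formula: with an invertible map z = f(x) and a simple prior pθ(z), the model density is log pθ(x) = log pθ(z) + log |det(dz/dx)|, which is what makes the log-likelihood exactly tractable. Below is a hedged sketch using a toy elementwise affine flow; the scale s and bias b are illustrative assumptions, not a layer from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
s = rng.uniform(0.5, 2.0, size=d)   # per-dimension scales (assumed toy flow)
b = rng.normal(size=d)              # per-dimension biases

def forward(x):
    """Invertible elementwise transform z = s*x + b with its log|det|."""
    z = s * x + b
    logdet = np.sum(np.log(np.abs(s)))  # Jacobian is diag(s)
    return z, logdet

def log_prior(z):
    """Log-density of a standard spherical Gaussian N(z; 0, I)."""
    return -0.5 * np.sum(z**2 + np.log(2.0 * np.pi))

x = rng.normal(size=d)
z, logdet = forward(x)

# Exact log-likelihood via change of variables:
log_px = log_prior(z) + logdet
```

Because f is invertible, the same computation run backwards, x = (z − b) / s, gives exact latent-variable inference, the first merit listed above.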