How does the generator learn what class to generate?
The discriminator checks whether an image looks real or fake based on (i.e., conditioned on) a certain class.
Correct! The discriminator also receives the class label and classifies images based on whether they look like real images from that specific class.
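As a rough sketch (PyTorch-style, with a made-up helper name and a 10-class example), the class label can be one-hot encoded, broadcast to image size, and appended to the image as extra channels before the discriminator sees it:

```python
import torch
import torch.nn.functional as F

def combine_image_and_label(image, label, n_classes):
    """Broadcast a one-hot class label to image size and concatenate it
    to the image as extra channels (hypothetical helper)."""
    # image: (batch, channels, height, width); label: (batch,) of class indices
    one_hot = F.one_hot(label, n_classes).float()                # (batch, n_classes)
    label_channels = one_hot[:, :, None, None].expand(
        -1, -1, image.shape[2], image.shape[3])                  # (batch, n_classes, H, W)
    return torch.cat([image, label_channels], dim=1)             # (batch, channels + n_classes, H, W)

# disc_input = combine_image_and_label(real_images, labels, n_classes=10)
# prediction = discriminator(disc_input)  # "real image of this class" vs. not
```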
How is adding the class information different for the discriminator and generator, and why?
For the discriminator, the class information is appended to the image as extra channels (or by a similar method), whereas for the generator, the class is encoded as a one-hot vector and appended to the noise to form one long input vector.
Correct! Both the discriminator and generator receive the class information appended to their traditional inputs for conditional
generation.
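On the generator side, a minimal sketch of the same idea (names and shapes assumed) simply concatenates the one-hot class vector to the noise vector before the forward pass:

```python
import torch
import torch.nn.functional as F

def combine_noise_and_label(noise, label, n_classes):
    """Append a one-hot class vector to the noise to form one long input vector."""
    # noise: (batch, z_dim); label: (batch,) of class indices
    one_hot = F.one_hot(label, n_classes).float()   # (batch, n_classes)
    return torch.cat([noise, one_hot], dim=1)       # (batch, z_dim + n_classes)

# z = torch.randn(batch_size, z_dim)
# gen_input = combine_noise_and_label(z, labels, n_classes=10)
# fake_images = generator(gen_input)
```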
What is a key difference between controllable generation and conditional generation?
Controllable generation is done after training by modifying the z-vectors passed to the generator, whereas conditional generation requires a labelled dataset during training.
Correct! While conditional generation leverages labels during training, controllable generation lets you control which features appear in the output examples after the model has been trained.
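To illustrate the "after training" part, here is a small sketch that uses a stand-in generator and a hypothetical feature direction in z-space; nothing is trained, only the noise vectors are modified:

```python
import torch
import torch.nn as nn

z_dim = 64

# Stand-in generator so the sketch runs; in practice this is your trained GAN generator.
generator = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, 784))
generator.eval()  # no training happens during controllable generation

# Hypothetical unit direction in z-space associated with one feature.
direction = torch.randn(1, z_dim)
direction = direction / direction.norm()

z = torch.randn(8, z_dim)
with torch.no_grad():
    for alpha in (0.0, 0.5, 1.0, 2.0):
        outputs = generator(z + alpha * direction)  # only the noise changes; the weights stay fixed
```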
How are controllable generation and interpolation similar?
They both change features by adjusting the values of the z-vector.
Correct! Both are done by manipulating the noise vector z.
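For comparison, interpolation manipulates the same noise vector; a minimal sketch of linear interpolation between two z-vectors (generator and z_dim assumed to exist) could look like this:

```python
import torch

def interpolate_z(z_start, z_end, n_steps):
    """Linearly interpolate between two noise vectors."""
    weights = torch.linspace(0, 1, n_steps).view(-1, 1)   # (n_steps, 1)
    return (1 - weights) * z_start + weights * z_end      # (n_steps, z_dim)

# z1, z2 = torch.randn(1, z_dim), torch.randn(1, z_dim)
# images = generator(interpolate_z(z1, z2, n_steps=10))  # walk through z-space
```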
When does controllable generation commonly fail?
When features strongly correlate with each other and z-values don’t map clearly to individual features in the output images.
Correct! When the z-space is entangled, movement in different directions affects multiple features simultaneously in the output. This makes it difficult to control a single feature without modifying others.
How can you use a classifier for controllable generation?
You can backpropagate through the classifier to compute the gradient of certain feature scores with respect to the z-vectors, which gives the direction in which to move the z-vectors.
Correct! Pre-trained classifiers can be used to find directions in the z-space associated with features in the output of GANs. Remember
that to find those directions, you need to modify the noise vectors
without changing the generator.
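A minimal sketch of that idea, assuming a pre-trained generator and a classifier that outputs per-feature scores (both hypothetical names here): backpropagate the feature score to z and step z in the gradient direction, leaving all weights untouched.

```python
import torch

def classifier_gradient_step(generator, classifier, z, target_index, step_size=0.1):
    """Nudge noise vectors toward a target feature using a pre-trained classifier.
    Only z is updated; the generator's and classifier's weights are left unchanged."""
    z = z.detach().clone().requires_grad_(True)
    images = generator(z)
    score = classifier(images)[:, target_index].sum()   # how strongly the feature is present
    score.backward()                                     # gradient flows back to z
    with torch.no_grad():
        return z + step_size * z.grad                    # move z toward the feature

# for _ in range(n_gradient_steps):
#     z = classifier_gradient_step(generator, classifier, z, target_index=feature_idx)
```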
What is the purpose of disentangling models?
To make the values in a z-vector correspond to meaningful features.
Correct! In a disentangled z-space, specific indices of the noise vector each correspond to a single feature in the output.