GANs Introduction Series (Part 1)

Reference blog 1
Reference blog 2
This post is just my personal reading notes on blog 1; you can skip it and read blog 1/2 directly. Blog 1 explains what GANs are from a fairly broad perspective, while blog 2 covers the concrete details. I suggest finishing blog 1 first and then reading blog 2 to deepen the understanding.

Advantages:

  1. can learn to mimic any distribution of data
  2. act as robot artists in a sense, and their output can be impressive

Disadvantages:

  1. need a long time to train (several hours on a single GPU, around a day on a single CPU)
  2. are difficult to tune

Generative vs. Discriminative Algorithms

  • Discriminative Algorithms
    • try to classify input data: given the features of a data instance X, they predict the label or category Y to which that data belongs (map: X → Y)
    • Discriminative models learn the boundary between classes
  • Generative Algorithms
    • attempt to predict features given a certain label (map: Y → X)
    • Generative models model the distribution of individual classes (a minimal sketch contrasting the two follows this list)
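
To make the X → Y vs. Y → X contrast concrete, here is a minimal toy sketch (my own illustration, not taken from the referenced blogs): a logistic-regression classifier learns to map features to a label, while per-class Gaussian statistics of the same features let us "generate" new feature vectors for a chosen label. The toy data, class means, and scikit-learn usage are all assumptions made for illustration.

# Discriminative (X -> Y) vs. generative (Y -> X) on 2-D toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two classes, each a Gaussian blob in 2-D feature space.
X0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(500, 2))   # class 0 features
X1 = rng.normal(loc=[+2.0, 0.0], scale=1.0, size=(500, 2))   # class 1 features
X = np.vstack([X0, X1])
y = np.array([0] * 500 + [1] * 500)

# Discriminative: learn the boundary p(Y | X), i.e. map features X to a label Y.
clf = LogisticRegression().fit(X, y)
print("predicted label for [1.5, 0.3]:", clf.predict([[1.5, 0.3]])[0])

# Generative: model each class's feature distribution p(X | Y), then map a label Y
# back to plausible features X by sampling from the fitted class distribution.
class_stats = {c: (X[y == c].mean(axis=0), X[y == c].std(axis=0)) for c in (0, 1)}
mean1, std1 = class_stats[1]
new_sample = rng.normal(mean1, std1)          # "generate" a feature vector for class 1
print("sampled features for label 1:", new_sample)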

GANs

GANs = generator + discriminator

  • generator: generates new data instances
    • Goal: generate data X that fools the discriminator without being caught
  • discriminator: evaluates the data instances for authenticity (decides whether each instance of data it reviews belongs to the actual training dataset or not)
    • Goal: identify the X coming from the generator as fake
  • both need to be trained alternately

Example:

  • discriminator
    • a standard convolutional network that classifies an input image as real or fake
    • takes an image and downsamples it to produce a probability
  • generator
    • an inverse (transposed) convolutional network
    • takes a vector of random noise and upsamples it to an image (a minimal sketch of both networks follows)
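
As a concrete illustration of this example, here is a minimal Keras sketch of both networks. The layer sizes, the 28×28 grayscale image shape, and the activation choices are my own assumptions, not anything prescribed by the referenced blogs.

# A minimal Keras sketch of the generator/discriminator pair described above.
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 100  # size of the random-noise vector fed to the generator

# Generator: noise vector -> upsampled 28x28x1 image.
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    layers.Dense(7 * 7 * 128, activation="relu"),
    layers.Reshape((7, 7, 128)),
    layers.Conv2DTranspose(64, kernel_size=4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, kernel_size=4, strides=2, padding="same", activation="tanh"),
])

# Discriminator: image -> downsampled features -> single real/fake probability.
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(64, kernel_size=4, strides=2, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Conv2D(128, kernel_size=4, strides=2, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])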

GANs, Autoencoders and VAEs

Autoencoders: encode input data as vectors, creating a hidden/compressed representation of the raw data

VAEs (Variational Autoencoders)

  • a generative algorithm that adds an additional constraint when encoding the input data, namely that the hidden representations roughly follow a normal distribution
  • capable of both compressing data like an autoencoder and synthesizing data like a GAN (a minimal sketch of the extra constraint follows this list)
  • images generated by GANs tend to have fine, granular detail, while images generated by VAEs tend to be more blurred
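
To show what that extra constraint looks like in practice, here is a minimal NumPy sketch (my own illustration; the latent values are made up): the encoder outputs a mean and log-variance per latent dimension, a latent code is sampled via the reparameterization trick, and a KL-divergence penalty pushes the latent distribution toward a standard normal.

# The VAE-specific pieces: reparameterization trick + KL penalty toward N(0, I).
import numpy as np

rng = np.random.default_rng(0)

# Pretend these came out of an encoder network for one input example.
mu = np.array([0.5, -1.2, 0.1])        # latent means
log_var = np.array([-0.3, 0.2, -1.0])  # latent log-variances

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps   # sampled latent code fed to the decoder

# KL divergence between N(mu, sigma^2) and the unit Gaussian N(0, I); this is the
# "normalization" constraint added to the usual autoencoder reconstruction loss.
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
print("sampled z:", z)
print("KL penalty:", kl)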

Training

A GAN is trained alternately, each network against a static adversary: while training the discriminator, the generator is frozen, and vice versa.

Each side of a GAN can overpower the other. If the discriminator is too good, it will return values so close to 0 or 1 that the generator will struggle to read the gradient. If the generator is too good, it will persistently exploit weaknesses in the discriminator that lead to false negatives. This may be mitigated by the nets’ respective learning rates.
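
For context (this formula comes from the original GAN paper by Goodfellow et al., 2014, not from the blogs summarized here), the two networks are playing the minimax game

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

When the discriminator is nearly perfect, D(G(z)) stays close to 0 and the log(1 − D(G(z))) term is almost flat, so the generator receives vanishing gradients; this is exactly the "struggle to read the gradient" problem above, and it is why implementations commonly train the generator to maximize log D(G(z)) instead.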

# GAN pseudocode
# CombinedModel = Generator + Discriminator (stacked)
# Flow in CombinedModel: noise as input => Generator produces images => Discriminator scores their validity
for epoch in range(epochs):
    # Train the Discriminator
    real_imgs = ...                           # a batch sampled from the real dataset
    noise = ...                               # a batch of random noise vectors
    fake_imgs = Generator.predict(noise)      # forward pass only; the Generator is not updated here
    Discriminator.train(x=real_imgs, y=[True])   # True for real images
    Discriminator.train(x=fake_imgs, y=[False])  # False for generated (fake) images
    # Train the Generator
    Discriminator.trainable = False           # freeze the Discriminator so only the Generator is updated
    noise2 = ...                              # a fresh batch of noise (the combined model takes noise, not images, as input)
    CombinedModel.train(x=noise2, y=[True])   # label the fakes as True: the Generator is trained to fool the frozen Discriminator
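
The pseudocode maps fairly directly onto the Keras API. Below is a more concrete sketch, assuming the generator and discriminator from the earlier Keras sketch, MNIST images rescaled to [-1, 1], and arbitrary choices of batch size, optimizer, and number of steps; it follows the common pattern of compiling the discriminator while trainable and the combined model while frozen, rather than toggling trainable inside the loop.

# A more concrete Keras training loop (assumes the `generator` and `discriminator`
# defined in the earlier sketch; sizes and hyperparameters are arbitrary).
import numpy as np
import tensorflow as tf

batch_size, steps, latent_dim = 64, 10000, 100

# Compile the discriminator on its own (it is trainable here).
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: noise -> generator -> (frozen) discriminator -> validity score.
discriminator.trainable = False
noise_in = tf.keras.Input(shape=(latent_dim,))
combined = tf.keras.Model(noise_in, discriminator(generator(noise_in)))
combined.compile(optimizer="adam", loss="binary_crossentropy")

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train.astype("float32") / 127.5 - 1.0)[..., np.newaxis]  # -> [-1, 1], shape (N, 28, 28, 1)

real_labels = np.ones((batch_size, 1))
fake_labels = np.zeros((batch_size, 1))

for step in range(steps):
    # Train the discriminator on one real batch and one fake batch.
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    real_imgs = x_train[idx]
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_imgs = generator.predict(noise, verbose=0)
    d_loss_real = discriminator.train_on_batch(real_imgs, real_labels)
    d_loss_fake = discriminator.train_on_batch(fake_imgs, fake_labels)

    # Train the generator through the combined model (discriminator frozen there).
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    g_loss = combined.train_on_batch(noise, real_labels)  # "lie": label fakes as real

    if step % 1000 == 0:
        print(f"step {step}: d_real={d_loss_real:.3f}, d_fake={d_loss_fake:.3f}, g={g_loss:.3f}")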
