Chapter 4. Generative Adversarial Networks

There isn't much I feel I need to write about this chapter: I already had a rough idea of what GANs are, and the chapter only covers the most basic GAN material 🤣🤣🤣 But skipping it entirely would bother me, so here are some loose notes recording the key points.

The book's definition of a GAN: Simply put, a GAN is a battle between two adversaries, the generator and the discriminator. The generator tries to convert random noise into observations that look as if they have been sampled from the original dataset and the discriminator tries to predict whether an observation comes from the original dataset or is one of the generator's forgeries.

The key to GANs lies in how we alternate the training of the two networks, so that as the generator becomes more adept at fooling the discriminator, the discriminator must adapt in order to maintain its ability to correctly identify which observations are fake.
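
To make the alternating scheme concrete, here is a minimal sketch of one training step (my own PyTorch toy example with placeholder networks and sizes, not the book's Keras code):

```python
# A minimal sketch of the alternating GAN training step (placeholder MLPs and
# sizes are my own assumptions, e.g. flattened 28x28 images in [-1, 1]).
import torch
import torch.nn as nn

latent_dim, data_dim = 100, 784  # assumed sizes for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))  # outputs a logit, no sigmoid

opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator step: real samples should score 1, forgeries 0.
    fake_batch = generator(torch.randn(n, latent_dim)).detach()  # freeze G here
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: G wants D to label its new forgeries as real (1).
    g_loss = bce(discriminator(generator(torch.randn(n, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Alternating these two steps is the whole adversarial game: as the generator gets better at fooling the discriminator, the discriminator has to adapt to keep telling real from fake.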

The book then lists several problems that can arise when training GANs:

  • Oscillating Loss: this simply means the loss is unstable, oscillating rather than settling down as training proceeds.
  • Mode Collapse: I find this one quite interesting. Mode collapse occurs when the generator finds a small number of samples that fool the discriminator and therefore isn't able to produce any examples other than this limited set.
  • Uninformative Loss. This lack of correlation between the generator loss and image quality sometimes makes GAN training difficult to monitor. In other words, the loss doesn't actually reflect the quality of the generated images, because the discriminator it is measured against keeps improving: even if the loss against the current discriminator is higher, the generated images may be much better than before (see the monitoring sketch after this list).
  • Hyperparameters. GANs are highly sensitive to very slight changes in all of their hyperparameters.
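
Because the loss alone says so little, a common workaround is to save samples from the generator at fixed intervals and judge progress by eye. A small hypothetical helper, assuming the toy `generator` sketched above:

```python
# Hypothetical monitoring helper: the generator loss is measured against an
# ever-improving discriminator, so periodically dumping images is a more
# reliable signal of progress than the loss curve.
import torch
import matplotlib.pyplot as plt

def save_samples(generator, epoch, latent_dim=100, n=16):
    generator.eval()
    with torch.no_grad():
        imgs = generator(torch.randn(n, latent_dim)).view(n, 28, 28)
    generator.train()

    fig, axes = plt.subplots(4, 4, figsize=(4, 4))
    for img, ax in zip(imgs, axes.flat):
        ax.imshow(img.numpy(), cmap="gray")
        ax.axis("off")
    fig.savefig(f"samples_epoch_{epoch:03d}.png")
    plt.close(fig)
```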

So how do we address these problems? Naturally, by improving parts of the GAN. The book presents two improved models:

  • Wasserstein GAN (WGAN)
  • Wasserstein GAN–Gradient Penalty (WGAN-GP)

I didn't fully follow how they work in detail; I'll come back and fill that gap later 🤦‍♂️👀 A rough sketch of the key idea, as far as I understand it, is below.
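
The core change is that the WGAN critic outputs an unbounded score rather than a probability, the generator simply tries to raise that score on its forgeries, and WGAN-GP replaces weight clipping with a gradient penalty that keeps the critic roughly 1-Lipschitz. The following is my own PyTorch sketch of those losses, not the book's code, and it assumes flat feature vectors as in the toy example above:

```python
# A rough sketch of the WGAN-GP losses (assumes flat feature vectors).
import torch

def gradient_penalty(critic, real, fake, gp_weight=10.0):
    # Evaluate the critic on random interpolations between real and fake
    # samples and penalize gradient norms that deviate from 1 (the soft
    # Lipschitz constraint used in place of weight clipping).
    alpha = torch.rand(real.size(0), 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grads = torch.autograd.grad(
        outputs=critic(interp).sum(), inputs=interp, create_graph=True)[0]
    return gp_weight * ((grads.norm(2, dim=1) - 1) ** 2).mean()

def critic_loss(critic, real, fake):
    # Wasserstein estimate: push real scores up, fake scores down,
    # plus the gradient penalty (the "GP" in WGAN-GP).
    return critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)

def generator_loss(critic, fake):
    # The generator tries to raise the critic's score on its forgeries.
    return -critic(fake).mean()
```

Training still alternates between the two networks as before; in practice the critic is usually updated several times for each generator update.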
