An Introduction to GANs

Discriminative vs. Generative models



Before looking at GANs, let’s briefly review the difference between generative and discriminative models:

A discriminative model learns a function that maps the input data (x) to some desired output class label (y). In probabilistic terms, it directly learns the conditional distribution P(y|x).


A generative model tries to learn the joint probability of the input data and labels simultaneously, i.e. P(x, y). This can be converted to P(y|x) for classification via Bayes' rule, but the generative ability could be used for something else as well, such as creating likely new (x, y) samples.
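Concretely, the conversion from the joint distribution to the conditional one follows from Bayes' rule together with the law of total probability:

```latex
P(y \mid x) \;=\; \frac{P(x, y)}{P(x)} \;=\; \frac{P(x \mid y)\,P(y)}{\sum_{y'} P(x \mid y')\,P(y')}
```

So a model of the joint distribution P(x, y) is strictly more informative than one of P(y|x) alone: it can classify, but it can also evaluate or sample x itself.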


Both types of models are useful, but generative models have one interesting advantage over discriminative models – they have the potential to understand and explain the underlying structure of the input data even when there are no labels. This is very desirable when working on data modelling problems in the real world, as unlabelled data is of course abundant, but getting labelled data is often expensive at best and impractical at worst.


Generative Adversarial Networks

GANs are an interesting idea that were first introduced in 2014 by a group of researchers at the University of Montreal led by Ian Goodfellow (now at OpenAI). The main idea behind a GAN is to have two competing neural network models. One takes noise as input and generates samples (and so is called the generator). The other model (called the discriminator) receives samples from both the generator and the training data, and has to be able to distinguish between the two sources. These two networks play a continuous game, where the generator is learning to produce more and more realistic samples, and the discriminator is learning to get better and better at distinguishing generated data from real data. These two networks are trained simultaneously, and the hope is that the competition will drive the generated samples to be indistinguishable from real data.
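This competition is usually written as the minimax objective from the original 2014 paper, where G is the generator, D the discriminator, p_data the data distribution, and p_z the noise prior:

```latex
\min_G \max_D \, V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator maximizes V (it wants D(x) → 1 on real data and D(G(z)) → 0 on generated data), while the generator minimizes it (it wants D(G(z)) → 1).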


The analogy that is often used here is that the generator is like a forger trying to produce some counterfeit material, and the discriminator is like the police trying to detect the forged items. This setup may also seem somewhat reminiscent of reinforcement learning, where the generator is receiving a reward signal from the discriminator letting it know whether the generated data is accurate or not. The key difference with GANs however is that we can backpropagate gradient information from the discriminator back to the generator network, so the generator knows how to adapt its parameters in order to produce output data that can fool the discriminator.

Approximating a 1D Gaussian distribution
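The original post's code for this experiment is not reproduced here, so below is a minimal sketch of my own (not the author's implementation) of the idea: a GAN fitting a 1D Gaussian. To keep the gradients hand-derivable, the generator is a scalar affine map G(z) = a·z + b and the discriminator is logistic regression D(x) = sigmoid(w·x + c); the generator uses the common non-saturating loss -log D(G(z)), and a small L2 penalty on the discriminator (an assumption added for stability, not part of the classic recipe) damps the oscillations that plain alternating gradient steps produce on this game.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Real data: a 1D Gaussian N(4, 1); generator noise: z ~ N(0, 1).
REAL_MU, REAL_SIGMA = 4.0, 1.0

a, b = 1.0, 0.0   # generator G(z) = a*z + b (can represent any 1D Gaussian)
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)

lr, reg, batch = 0.05, 0.1, 64
for _ in range(4000):
    x = rng.normal(REAL_MU, REAL_SIGMA, batch)   # real samples
    z = rng.normal(0.0, 1.0, batch)              # noise
    g = a * z + b                                # generated samples

    # Discriminator step: gradient *ascent* on log D(x) + log(1 - D(G(z))),
    # with a small L2 penalty on (w, c) to damp the minimax oscillations.
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * g + c)
    w += lr * (np.mean((1.0 - d_real) * x) - np.mean(d_fake * g) - reg * w)
    c += lr * (np.mean(1.0 - d_real) - np.mean(d_fake) - reg * c)

    # Generator step: gradient *descent* on the non-saturating loss
    # -log D(G(z)); the gradient flows through D back into (a, b), which is
    # exactly the backpropagation link the text contrasts with RL rewards.
    d_fake = sigmoid(w * g + c)
    dg = -(1.0 - d_fake) * w      # d(loss)/d(g_i) for each generated sample
    a -= lr * np.mean(dg * z)
    b -= lr * np.mean(dg)

gen_mean = float(b)  # E[G(z)] = a*E[z] + b = b, since E[z] = 0
print(f"generated mean after training: {gen_mean:.2f} (target {REAL_MU})")
```

With only a linear discriminator the generator mainly receives a signal about the mean, so the generated mean drifts from 0 toward 4 while the scale a stays near its initial value; a richer discriminator (e.g. a small MLP) would be needed to match higher moments as well.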






