Evolutionary Generative Adversarial Networks


Chaoyue Wang, Chang Xu, Xin Yao, Dacheng Tao
Centre for Artificial Intelligence, School of Software, University of Technology Sydney, Australia
UBTECH Sydney AI Centre, School of IT, FEIT, The University of Sydney, Australia
Department of Computer Science and Engineering, Southern University of Science and Technology, China
School of Computer Science, University of Birmingham, U.K.
chaoyue.wang@student.uts.edu.au, c.xu@sydney.edu.au, x.yao@cs.bham.ac.uk, dacheng.tao@sydney.edu.au

Abstract

In this paper, we propose a novel GAN framework called evolutionary generative adversarial networks (E-GAN) for stable GAN training and improved generative performance. Unlike existing GANs, which alternately train a generator and a discriminator using a pre-defined adversarial objective function, we utilize different adversarial training objectives as mutation operations and evolve a population of generators to adapt to the environment (i.e., the discriminator). We also utilize an evaluation mechanism to measure the quality and diversity of generated samples, such that only well-performing generator(s) are preserved and used for further training.
We devise a framework that utilizes different metrics to jointly optimize the generator, improving both training stability and generative performance. We build an evolutionary generative adversarial network (E-GAN), which treats the adversarial training procedure as an evolutionary problem. Specifically, a discriminator acts as the environment (i.e., provides adaptive loss functions) and a population of generators evolves in response to that environment. During each adversarial (or evolutionary) iteration, the discriminator is still trained to recognize real and fake samples. However, in our method, the generators, acting as parents, undergo different mutations to produce offspring that adapt to the environment. Different adversarial objective functions aim to minimize different distances between the generated distribution and the data distribution, leading to different mutations. Meanwhile, given the current optimal discriminator, we measure the quality and diversity of samples generated by the updated offspring. Finally, following the principle of "survival of the fittest", poorly-performing offspring are removed and the remaining well-performing offspring (i.e., generators) are preserved and used for further training.
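The evolve-evaluate-select loop above can be sketched in a toy form. Everything here is a stand-in: the real E-GAN mutates deep generator networks with minimax, heuristic, and least-squares adversarial objectives and scores them against a trained discriminator, whereas this sketch evolves a single scalar "generator" parameter toward a data mean of 0 under three hypothetical mutation rules.

```python
import random

def mutate(parent, objective, lr=0.1):
    # Each "mutation" updates the generator under a different objective.
    # These are stand-in gradients, not the paper's actual losses.
    grads = {"minimax": parent,
             "heuristic": 0.5 * parent,
             "least-squares": 2.0 * parent}
    return parent - lr * grads[objective]

def fitness(child, gamma=0.5):
    # Fitness = quality + gamma * diversity. Quality here is closeness to
    # the data mean; diversity is a placeholder (real E-GAN derives a
    # diversity score from the discriminator's gradients).
    quality = -abs(child)
    diversity = 0.0
    return quality + gamma * diversity

random.seed(0)
population = [random.uniform(-1, 1) for _ in range(4)]
for step in range(50):
    # Each parent produces one offspring per mutation (adversarial objective).
    offspring = [mutate(p, obj) for p in population
                 for obj in ("minimax", "heuristic", "least-squares")]
    # Survival of the fittest: keep only the best |population| offspring.
    population = sorted(offspring, key=fitness, reverse=True)[:len(population)]
```

After a few dozen generations the surviving "generators" cluster tightly around the data mean, because selection repeatedly favors the mutation that moves fastest toward it.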
Related Work
1. Generative Adversarial Networks
Generative adversarial networks (GANs) provide an excellent framework for learning deep generative models, which aim to capture probability distributions over the given data. Compared to other generative models, a GAN is easily trained by alternately updating a generator and a discriminator using the back-propagation algorithm. In many generative tasks, GANs (the original GAN and its variants) produce better samples than other generative models.
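The alternating update scheme can be illustrated with a deliberately minimal 1-D sketch (not the architecture used in any of the cited works): a scalar "generator" theta produces samples theta + z, a logistic "discriminator" D(x) = sigmoid(a*x + b) scores them, and the two take turns ascending their respective objectives by manual gradient steps.

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

mu = 2.0                 # data distribution: mu plus small Gaussian noise
theta, a, b = 0.0, 0.0, 0.0
lr = 0.05
random.seed(0)

for step in range(2000):
    z = random.gauss(0, 0.1)
    x_real = mu + random.gauss(0, 0.1)
    x_fake = theta + z

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(a * x_real + b)
    d_fake = sigmoid(a * x_fake + b)
    a += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: non-saturating loss, ascend log D(fake).
    d_fake = sigmoid(a * x_fake + b)
    theta += lr * (1 - d_fake) * a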
2. Evolutionary Algorithms
Recently, evolutionary algorithms have been introduced to solve deep learning problems. To minimize human participation in designing deep algorithms and automatically discover such configurations, there have been many attempts to optimize deep learning hyper-parameters and design deep network architectures through an evolutionary search [35, 20, 25]. Evolutionary algorithms have also demonstrated their capacity to optimize deep neural networks [15, 34]. Moreover, [27] proposed a novel evolutionary strategy as an alternative to the popular MDP-based reinforcement learning (RL) techniques, achieving strong performance on RL benchmarks. Last but not least, an evolutionary algorithm was proposed to compress deep learning models by automatically eliminating redundant convolution filters [33].
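A bare-bones version of such an evolutionary search can be sketched as follows. The objective here is a hypothetical validation-loss surface over a single log-scale learning rate (the cited works evolve far richer configurations, such as full network architectures), but the mutate-then-select structure is the same.

```python
import random

random.seed(0)

def val_loss(log_lr):
    # Hypothetical validation loss, minimized at log_lr = -3 (i.e., lr = 1e-3).
    return (log_lr + 3.0) ** 2

# Initial population of candidate hyper-parameters.
population = [random.uniform(-6.0, 0.0) for _ in range(8)]

for generation in range(30):
    # Mutation: perturb each candidate with Gaussian noise (two offspring each).
    offspring = [p + random.gauss(0, 0.3) for p in population for _ in range(2)]
    # Selection with elitism: keep the 8 lowest-loss candidates overall.
    population = sorted(population + offspring, key=val_loss)[:8]

best = min(population, key=val_loss)
```

With elitism the best candidate never regresses, so after a few dozen generations the population concentrates near the loss minimum.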
Method