CycleGAN Explained and Implemented with TensorFlow 2

```python
import os
import math
import datetime
import argparse

import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow_addons.layers import InstanceNormalization
```

Generator

```python
def encoder_layer(inputs,
                  filters=16,
                  kernel_size=3,
                  strides=2,
                  activation='leaky_relu',
                  instance_normal=True):
    """Encoder layer: IN-(Leaky)ReLU-Conv2D, IN is optional."""
    conv = keras.layers.Conv2D(filters=filters,
                               kernel_size=kernel_size,
                               strides=strides,
                               padding='same')
    x = inputs
    if instance_normal:
        x = InstanceNormalization(axis=3)(x)
    if activation == 'relu':
        x = keras.layers.Activation('relu')(x)
    else:
        x = keras.layers.LeakyReLU(alpha=0.2)(x)
    x = conv(x)
    return x
```

```python
def decoder_layer(inputs,
                  paired_inputs,
                  filters=16,
                  kernel_size=3,
                  strides=2,
                  activation='leaky_relu',
                  instance_normal=True):
    """Decoder layer: IN-(Leaky)ReLU-Conv2DTranspose, IN is optional.

    Arguments (partial):
        inputs (tensor): the decoder layer input
        paired_inputs (tensor): the encoder layer output provided by the
            U-Net skip connection and concatenated to the output
    """
    conv = keras.layers.Conv2DTranspose(filters=filters,
                                        kernel_size=kernel_size,
                                        strides=strides,
                                        padding='same')
    x = inputs
    if instance_normal:
        x = InstanceNormalization(axis=3)(x)
    if activation == 'relu':
        x = keras.layers.Activation('relu')(x)
    else:
        x = keras.layers.LeakyReLU(alpha=0.2)(x)
    x = conv(x)
    x = keras.layers.concatenate([x, paired_inputs])
    return x
```

```python
def build_generator(input_shape,
                    output_shape=None,
                    kernel_size=3,
                    name=None):
    """The generator is a U-Net made of a 4-layer encoder and a
    4-layer decoder. Encoder layer i is skip-connected to decoder
    layer n - i.

    Arguments:
        input_shape (tuple): input shape
        output_shape (tuple): output shape
        kernel_size (int): kernel size of encoder and decoder layers
        name (string): name assigned to the generator model

    Returns:
        generator (Model)
    """
    inputs = keras.layers.Input(shape=input_shape)
    channels = int(output_shape[-1])
    e1 = encoder_layer(inputs, 32, kernel_size=kernel_size, strides=1)
    e2 = encoder_layer(e1, 64, kernel_size=kernel_size)
    e3 = encoder_layer(e2, 128, kernel_size=kernel_size)
    e4 = encoder_layer(e3, 256, kernel_size=kernel_size)
    d1 = decoder_layer(e4, e3, 128, kernel_size=kernel_size)
    d2 = decoder_layer(d1, e2, 64, kernel_size=kernel_size)
    d3 = decoder_layer(d2, e1, 32, kernel_size=kernel_size)
    outputs = keras.layers.Conv2DTranspose(channels,
                                           kernel_size=kernel_size,
                                           strides=1,
                                           activation='sigmoid',
                                           padding='same')(d3)
    generator = keras.Model(inputs, outputs, name=name)
    return generator
```
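To see how the U-Net skip connections shape the decoder, the spatial size and channel count at each stage can be traced with plain arithmetic. This is a sketch, not part of the model code; the 256 x 256 single-channel input is an assumption for illustration:

```python
def encoder_shape(hw, filters, stride=2):
    # Conv2D with "same" padding: output size = ceil(input / stride)
    return -(-hw // stride), filters          # ceil division

def decoder_shape(hw, filters, skip_channels, stride=2):
    # Conv2DTranspose multiplies the spatial size by the stride;
    # the skip connection concatenates the encoder channels
    return hw * stride, filters + skip_channels

hw = 256                                      # assumed input size
e1 = encoder_shape(hw, 32, stride=1)          # (256, 32)
e2 = encoder_shape(e1[0], 64)                 # (128, 64)
e3 = encoder_shape(e2[0], 128)                # (64, 128)
e4 = encoder_shape(e3[0], 256)                # (32, 256)
d1 = decoder_shape(e4[0], 128, e3[1])         # (64, 256)
d2 = decoder_shape(d1[0], 64, e2[1])          # (128, 128)
d3 = decoder_shape(d2[0], 32, e1[1])          # (256, 64)
print(d3)  # (256, 64)
```

The final stride-1 Conv2DTranspose then maps the 64 channels of d3 back to the output channel count.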

Discriminator

The CycleGAN discriminator is similar to the discriminator of a vanilla GAN: the input image is downsampled several times, and the last layer is a Dense(1) layer that predicts the probability that the input is real. Each layer resembles the generator's encoder layer, except that no IN is used. For large images, however, classifying the whole image as real or fake with a single probability leads to inefficient parameter updates and poor quality of the generated images.

The solution is PatchGAN, which divides the image into a grid of patches and predicts, as a grid of scalar values, the probability that each patch is real.

PatchGAN does not introduce a new type of GAN into CycleGAN. To improve the quality of the generated images, instead of producing a single discrimination result, a 2 x 2 PatchGAN produces four outputs. The loss function is unchanged.

(Figure: PatchGAN discriminator output grid)
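The size of that patch grid follows from the strides alone. A quick sketch (the 256 x 256 input is an assumption): four stride-2 encoder layers plus the final stride-2 Conv2D divide each spatial dimension by 2^5, giving an 8 x 8 grid of real/fake scores:

```python
import math

def conv_out(size, stride):
    # "same" padding: output size = ceil(input / stride)
    return math.ceil(size / stride)

size = 256                       # assumed input width/height
for stride in [2, 2, 2, 2, 2]:   # 4 encoder layers + final patch Conv2D
    size = conv_out(size, stride)
print(size)  # 8 -> the discriminator outputs an 8 x 8 x 1 patch grid
```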

```python
def build_discriminator(input_shape,
                        kernel_size=3,
                        patchgan=True,
                        name=None):
    """The discriminator is a 4-layer encoder that outputs either
    a 1-dim score or an n x n patch of probabilities that the input
    is real.

    Arguments:
        input_shape (tuple): input shape
        kernel_size (int): kernel size of the encoder layers
        patchgan (bool): whether the output is a patch or just 1-dim
        name (string): name assigned to the discriminator model

    Returns:
        discriminator (Model)
    """
    inputs = keras.layers.Input(shape=input_shape)
    x = encoder_layer(inputs,
                      32,
                      kernel_size=kernel_size,
                      instance_normal=False)
    x = encoder_layer(x,
                      64,
                      kernel_size=kernel_size,
                      instance_normal=False)
    x = encoder_layer(x,
                      128,
                      kernel_size=kernel_size,
                      instance_normal=False)
    x = encoder_layer(x,
                      256,
                      kernel_size=kernel_size,
                      instance_normal=False)
    if patchgan:
        x = keras.layers.LeakyReLU(alpha=0.2)(x)
        outputs = keras.layers.Conv2D(1,
                                      kernel_size=kernel_size,
                                      strides=2,
                                      padding='same')(x)
    else:
        x = keras.layers.Flatten()(x)
        x = keras.layers.Dense(1)(x)
        outputs = keras.layers.Activation('linear')(x)
    discriminator = keras.Model(inputs, outputs, name=name)
    return discriminator
```

CycleGAN

The CycleGAN is built from the generators and discriminators. Two generators, g_source = $F$ and g_target = $G$, and two discriminators, d_source = $D_x$ and d_target = $D_y$, are instantiated. The forward cycle is $x' = F(G(x))$ = reco_source = g_source(g_target(source_input)). The backward cycle is $y' = G(F(y))$ = reco_target = g_target(g_source(target_input)).

The inputs of the adversarial model are the source and target data, and its outputs are the predictions of $D_x$ and $D_y$ together with the reconstructions $x'$ and $y'$. The identity network is not used here because of the mismatch in channel count between grayscale and color images. Loss weights of $\lambda_1 = 1.0$ and $\lambda_2 = 10.0$ are used for the GAN loss and the cycle-consistency loss, respectively. RMSprop is the optimizer for the discriminators, with a learning rate of 2e-4 and a decay rate of 6e-8. The learning rate and decay rate of the adversarial network are half those of the discriminators.
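Putting those weights together, the total objective of the adversarial model can be written as follows (using the LSGAN/MSE form for the GAN terms and MAE for the cycle terms, matching the `mse`/`mae` losses in the code):

```latex
\mathcal{L} = \lambda_1\left(\mathcal{L}_{GAN}^{x \to y} + \mathcal{L}_{GAN}^{y \to x}\right)
            + \lambda_2\left(\mathcal{L}_{cyc}^{x} + \mathcal{L}_{cyc}^{y}\right),
\qquad \lambda_1 = 1.0,\ \lambda_2 = 10.0
```

When `identity=True`, an extra identity term with weight 0.5 per domain is appended, matching the `loss_weights` list in the code.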

```python
def build_cyclegan(shapes,
                   source_name='source',
                   target_name='target',
                   kernel_size=3,
                   patchgan=False,
                   identity=False):
    """CycleGAN
    1. build target and source generators
    2. build target and source discriminators
    3. build the adversarial network

    Arguments:
        shapes (tuple): source and target shapes
        source_name (string): string appended to dis/gen model names
        target_name (string): string appended to dis/gen model names
        kernel_size (int): kernel size for the encoder/decoder layers
            of the dis/gen models
        patchgan (bool): whether to use PatchGAN on the discriminator
        identity (bool): whether to use the identity loss

    Returns:
        list: 2 generator, 2 discriminator, and 1 adversarial models
    """
    source_shape, target_shape = shapes
    lr = 2e-4
    decay = 6e-8
    gt_name = 'gen_' + target_name
    gs_name = 'gen_' + source_name
    dt_name = 'dis_' + target_name
    ds_name = 'dis_' + source_name

    # build target and source generators
    g_target = build_generator(source_shape,
                               target_shape,
                               kernel_size=kernel_size,
                               name=gt_name)
    g_source = build_generator(target_shape,
                               source_shape,
                               kernel_size=kernel_size,
                               name=gs_name)
    print('----TARGET GENERATOR----')
    g_target.summary()
    print('----SOURCE GENERATOR----')
    g_source.summary()

    # build target and source discriminators
    d_target = build_discriminator(target_shape,
                                   patchgan=patchgan,
                                   kernel_size=kernel_size,
                                   name=dt_name)
    d_source = build_discriminator(source_shape,
                                   patchgan=patchgan,
                                   kernel_size=kernel_size,
                                   name=ds_name)
    print('----TARGET DISCRIMINATOR----')
    d_target.summary()
    print('----SOURCE DISCRIMINATOR----')
    d_source.summary()

    optimizer = keras.optimizers.RMSprop(lr=lr, decay=decay)
    d_target.compile(loss='mse',
                     optimizer=optimizer,
                     metrics=['acc'])
    d_source.compile(loss='mse',
                     optimizer=optimizer,
                     metrics=['acc'])

    # freeze the discriminators inside the adversarial model
    d_target.trainable = False
    d_source.trainable = False

    # the adversarial model:
    # forward cycle network and target discriminator
    source_input = keras.layers.Input(shape=source_shape)
    fake_target = g_target(source_input)
    preal_target = d_target(fake_target)
    reco_source = g_source(fake_target)

    # backward cycle network and source discriminator
    target_input = keras.layers.Input(shape=target_shape)
    fake_source = g_source(target_input)
    preal_source = d_source(fake_source)
    reco_target = g_target(fake_source)

    if identity:
        iden_source = g_source(source_input)
        iden_target = g_target(target_input)
        loss = ['mse', 'mse', 'mae', 'mae', 'mae', 'mae']
        loss_weights = [1.0, 1.0, 10.0, 10.0, 0.5, 0.5]
        inputs = [source_input, target_input]
        outputs = [preal_source,
                   preal_target,
                   reco_source,
                   reco_target,
                   iden_source,
                   iden_target]
    else:
        loss = ['mse', 'mse', 'mae', 'mae']
        loss_weights = [1.0, 1.0, 10.0, 10.0]
        inputs = [source_input, target_input]
        outputs = [preal_source, preal_target, reco_source, reco_target]

    # build the adversarial model with half the discriminator lr/decay
    adv = keras.Model(inputs, outputs, name='adversarial')
    optimizer = keras.optimizers.RMSprop(lr=lr * 0.5, decay=decay * 0.5)
    adv.compile(loss=loss,
                loss_weights=loss_weights,
                optimizer=optimizer,
                metrics=['acc'])
    print('----ADVERSARIAL NETWORK----')
    adv.summary()

    return g_source, g_target, d_source, d_target, adv
```
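One detail implied by the MSE losses above: during training, the labels fed to the discriminators (and to the adversarial model's GAN outputs) must match the discriminator's output shape. A minimal NumPy sketch; the batch size and the 8 x 8 patch grid are assumptions for a 256 x 256 input:

```python
import numpy as np

batch_size = 32          # assumed batch size
patch = 8                # assumed PatchGAN grid for a 256 x 256 input

if patch > 1:            # patchgan=True: one score per patch
    real = np.ones((batch_size, patch, patch, 1))
    fake = np.zeros((batch_size, patch, patch, 1))
else:                    # patchgan=False: a single linear score
    real = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))

print(real.shape, fake.shape)
```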

Loading and Preprocessing Data

```python
def rgb2gray(rgb):
    """Convert a color image to grayscale.

    Formula: grayscale = 0.299 * red + 0.587 * green + 0.114 * blue
    """
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
```
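Since the luma weights sum to 1.0, an all-white RGB image maps to 1.0 and the channel axis is dropped. A quick check (rgb2gray is redefined here so the snippet stands alone):

```python
import numpy as np

def rgb2gray(rgb):
    # luma weights: 0.299 R + 0.587 G + 0.114 B
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])

white = np.ones((4, 4, 3))   # a 4 x 4 all-white RGB image
gray = rgb2gray(white)
print(gray.shape)            # (4, 4): the channel axis is gone
```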

```python
def display_images(imgs,
                   filename,
                   title='',
                   imgs_dir=None,
                   show=False):
    """Display images in an n x n grid.

    Arguments:
        imgs (tensor): array of images
        filename (string): filename to save the displayed image
        title (string): title on the displayed image
        imgs_dir (string): directory where to save the files
        show (bool): whether to display the image or not
    """
    rows = imgs.shape[1]
    cols = imgs.shape[2]
    channels = imgs.shape[3]
    side = int(math.sqrt(imgs.shape[0]))
    assert int(side * side) == imgs.shape[0]

    # create the saved_images folder
    if imgs_dir is None:
        imgs_dir = 'saved_images'
    save_dir = os.path.join(os.getcwd(), imgs_dir)
    if not os.path.isdir(save_dir):
        os.makedirs(save_dir)
    filename = os.path.join(imgs_dir, filename)

    if channels == 1:
        imgs = imgs.reshape((side, side, rows, cols))
    else:
        imgs = imgs.reshape((side, side, rows, cols, channels))
    imgs = np.vstack([np.hstack(i) for i in imgs])
    plt.figure()
    plt.axis('off')
    plt.title(title)
    if channels == 1:
        plt.imshow(imgs, interpolation='none', cmap='gray')
    else:
        plt.imshow(imgs, interpolation='none')
    plt.savefig(filename)
    if show:
        plt.show()
    plt.close('all')
```
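The reshape-then-stack trick at the heart of display_images can be seen on toy data: four 2 x 2 grayscale "images" become one 4 x 4 canvas.

```python
import numpy as np

imgs = np.arange(16).reshape(4, 2, 2)   # 4 grayscale 2 x 2 images
side = int(np.sqrt(imgs.shape[0]))      # a 2 x 2 display grid
grid = imgs.reshape(side, side, 2, 2)
# hstack joins the images in each row; vstack stacks the rows
canvas = np.vstack([np.hstack(row) for row in grid])
print(canvas.shape)  # (4, 4)
```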

```python
def test_generator(generators,
                   test_data,
                   step,
                   titles,
                   dirs,
                   todisplay=100,
                   show=False):
    """Test the generator models.

    Arguments:
        generators (tuple): source and target generators
        test_data (tuple): source and target test data
        step (int): step number during training (0 during testing)
        titles (tuple): titles on the displayed images
        dirs (tuple): folders where the outputs are saved during testing
        todisplay (int): number of images to display
        show (bool): whether to display the image or not
    """
    # predict the output from the test data
    g_source, g_target = generators
    test_source_data, test_target_data = test_data
    t1, t2, t3, t4 = titles
    title_pred_source = t1
    title_pred_target = t2
    title_reco_source = t3
    title_reco_target = t4
    dir_pred_source, dir_pred_target = dirs

    pred_target_data = g_target.predict(test_source_data)
    pred_source_data = g_source.predict(test_target_data)
    reco_target_data = g_source.predict(pred_target_data)
    reco_source_data = g_target.predict(pred_source_data)

    # display the first todisplay images
    imgs = pred_target_data[:todisplay]
    filename = '%06d.png' % step
    step = 'step: {:,}'.format(step)
    title = title_pred_target + step
    display_images(imgs,
                   filename=filename,
                   imgs_dir=dir_pred_target,
                   title=title,
                   show=show)

    imgs = pred_source_data[:todisplay]
    title = title_pred_source
    display_images(imgs,
                   filename=filename,
                   imgs_dir=dir_pred_source,
                   title=title,
                   show=show)

    # display the reconstructed source and target images
    imgs = reco_source_data[:todisplay]
    title = title_reco_source
    display_images(imgs,
                   filename='reconstructed_source.png',
                   imgs_dir=dir_pred_source,
                   title=title,
                   show=show)

    imgs = reco_target_data[:todisplay]
    title = title_reco_target
    display_images(imgs,
                   filename='reconstructed_target.png',
                   imgs_dir=dir_pred_target,
                   title=title,
                   show=show)
```