e4 = encoder_layer(e3,256,kernel_size=kernel_size)
d1 = decoder_layer(e4,e3,128,kernel_size=kernel_size)
d2 = decoder_layer(d1,e2,64,kernel_size=kernel_size)
d3 = decoder_layer(d2,e1,32,kernel_size=kernel_size)
outputs = keras.layers.Conv2DTranspose(channels,
                                       kernel_size=kernel_size,
                                       strides=1,
                                       activation='sigmoid',
                                       padding='same')(d3)
generator = keras.Model(inputs,outputs,name=name)
return generator
Discriminator
The CycleGAN discriminator is similar to the discriminator of a vanilla GAN. The input image is downsampled several times, and the last layer is a Dense(1) layer that predicts the probability that the input is real. Each layer resembles an encoder layer of the generator, except that no IN (instance normalization) is used. On large images, however, classifying the whole image as real or fake with a single probability leads to inefficient parameter updates and poor quality in the generated images.
The solution is PatchGAN, which divides the image into a grid of patches and uses a grid of scalar values to predict the probability that each patch is real.
PatchGAN does not introduce a new type of GAN into CycleGAN. To improve the quality of the generated images, instead of outputting a single discrimination result, a 2 x 2 PatchGAN produces four outputs. The loss function is unchanged.
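A quick way to sanity-check the size of the patch grid is to trace the spatial dimension through the strided convolutions. The sketch below uses a hypothetical `patch_dim` helper and assumes every downsampling layer (including the final Conv2D(1) head) uses stride 2 with padding='same', as in the discriminator code that follows:

```python
import math

def patch_dim(size, strides):
    """Spatial size of the PatchGAN output for a square input.

    With padding='same', each strided conv maps size -> ceil(size / stride).
    `strides` lists the stride of every downsampling layer, including the
    final Conv2D(1) head.
    """
    for s in strides:
        size = math.ceil(size / s)
    return size

# Four stride-2 encoder layers plus the stride-2 Conv2D(1) head:
print(patch_dim(64, [2, 2, 2, 2, 2]))   # 64 -> 32 -> 16 -> 8 -> 4 -> 2
print(patch_dim(32, [2, 2, 2, 2, 2]))   # 32 -> 16 -> 8 -> 4 -> 2 -> 1
```

So a 64 x 64 input yields a 2 x 2 grid of patch probabilities (four outputs), while a 32 x 32 input collapses to a single scalar.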
def build_discriminator(input_shape,
                        kernel_size=3,
                        patchgan=True,
                        name=None):
    """The discriminator is a 4-layer encoder that outputs either
    a 1-dim or an n x n-dim patch of probability that input is real

    Arguments:
    input_shape (tuple): input shape
    kernel_size (int): kernel size of decoder layers
    patchgan (bool): whether the output is a patch or just a 1-dim
    name (string): name assigned to discriminator model

    Returns:
    discriminator (model)
    """
    inputs = keras.layers.Input(shape=input_shape)
    x = encoder_layer(inputs,
                      32,
                      kernel_size=kernel_size,
                      instance_normal=False)
    x = encoder_layer(x,
                      64,
                      kernel_size=kernel_size,
                      instance_normal=False)
    x = encoder_layer(x,
                      128,
                      kernel_size=kernel_size,
                      instance_normal=False)
    x = encoder_layer(x,
                      256,
                      kernel_size=kernel_size,
                      instance_normal=False)

    if patchgan:
        x = keras.layers.LeakyReLU(alpha=0.2)(x)
        outputs = keras.layers.Conv2D(1,
                                      kernel_size=kernel_size,
                                      strides=2,
                                      padding='same')(x)
    else:
        x = keras.layers.Flatten()(x)
        x = keras.layers.Dense(1)(x)
        outputs = keras.layers.Activation('linear')(x)

    discriminator = keras.Model(inputs, outputs, name=name)
    return discriminator
CycleGAN
The CycleGAN is built from the generators and discriminators. Two generators, g_source = F and g_target = G, and two discriminators, d_source = D_x and d_target = D_y, are instantiated. The forward cycle is x' = F(G(x)) = reco_source = g_source(g_target(source_input)). The backward cycle is y' = G(F(y)) = reco_target = g_target(g_source(target_input)).
The inputs of the adversarial model are the source and target data, while the outputs are the outputs of D_x and D_y together with the reconstructions of the inputs, x' and y'. The identity network is not used, because of the difference in the number of channels between the grayscale and color images. Loss weights of λ1 = 1.0 and λ2 = 10.0 are used for the GAN loss and the cycle-consistency loss, respectively. RMSprop is used as the optimizer for the discriminators, with a learning rate of 2e-4 and a decay rate of 6e-8. The learning rate and decay rate of the adversarial network are half those of the discriminators.
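The cycle wiring and the weighted loss can be sketched with toy scalar generators. The lambdas and the placeholder adversarial-loss value below are illustrative stand-ins, not the actual Keras models:

```python
# Toy illustration of the forward/backward cycles and the weighted loss.
# g_target / g_source are hypothetical scalar stand-ins for G and F.
LAMBDA1, LAMBDA2 = 1.0, 10.0  # GAN loss weight, cycle-consistency weight

g_target = lambda x: 2.0 * x + 1.0    # G: source -> target (toy)
g_source = lambda y: (y - 1.0) / 2.0  # F: target -> source (toy inverse)

source_input, target_input = 3.0, 7.0
reco_source = g_source(g_target(source_input))  # forward cycle x' = F(G(x))
reco_target = g_target(g_source(target_input))  # backward cycle y' = G(F(y))

# L1 (MAE) cycle-consistency loss, as in CycleGAN
cycle_loss = abs(reco_source - source_input) + abs(reco_target - target_input)

adv_loss = 0.25  # placeholder value for the adversarial (GAN) loss term
total_loss = LAMBDA1 * adv_loss + LAMBDA2 * cycle_loss
print(reco_source, reco_target, total_loss)  # 3.0 7.0 0.25
```

Because the toy generators invert each other exactly, the cycle loss vanishes; with real networks it is nonzero, and the λ2 = 10.0 weighting makes the reconstructions dominate the total loss.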
def build_cyclegan(shapes,
                   source_name='source',
                   target_name='target',
                   kernel_size=3,