DCGAN with TensorFlow

DCGAN stands for Deep Convolutional Generative Adversarial Network.
Reference documentation

What is a Generative Adversarial Network (GAN)?

A generative adversarial network is, in essence, two models trained simultaneously in an adversarial process. One is the generator, which you can think of as an artist: after studying real images, it creates images of its own. The other is the discriminator, a kind of art critic: it learns to tell real images apart from the images the generator creates.
Over the course of training, the images produced by the generator get closer and closer to the real images, until the discriminator can no longer tell real images from fakes.
To learn more about GANs, see the [MIT Introduction to Deep Learning course](http://introtodeeplearning.com/).

Installing dependencies

tensorflow == 2.0.0

!pip install -q tensorflow-gpu==2.0.0-alpha0

To export the results in GIF format, also install imageio:

!pip install -q imageio

Installation output below; note that without network access you will need to install the listed packages manually.

Requirement already satisfied: imageio in /usr/local/lib/python3.6/dist-packages (2.4.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from imageio) (1.14.6)
Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from imageio) (4.0.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow->imageio) (0.46)

Importing packages
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time
import tensorflow as tf

from IPython import display
Downloading the MNIST dataset
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step

train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5  # Normalize pixel values to [-1, 1]

BUFFER_SIZE = 60000  # number of examples in the dataset
BATCH_SIZE = 256  # number of examples per training batch

# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
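As a quick sanity check, the normalization above is just an affine map from [0, 255] to [-1, 1], matching the tanh output range of the generator defined below. A minimal sketch with plain NumPy (the pixel values here are toy data, not the real dataset):

```python
import numpy as np

# Toy pixel values covering the 8-bit range (not the real MNIST data).
pixels = np.array([0.0, 63.75, 127.5, 255.0], dtype=np.float32)

# The same transform as above: [0, 255] -> [-1, 1].
normalized = (pixels - 127.5) / 127.5
# normalized is now [-1, -0.5, 0, 1]
```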
Building the models

Both the generator and the discriminator are built with the Keras Sequential API.

The Generator
def make_generator_model():
    model = tf.keras.Sequential()  # a simple stack of layers
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))  # input: a 100-dim noise vector
    model.add(layers.BatchNormalization())  # batch normalization
    model.add(layers.LeakyReLU())  # activation

    model.add(layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size

    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))  # transposed convolution (upsampling) layer
    assert model.output_shape == (None, 14, 14, 64)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)

    return model

Reference on transposed convolution (in Chinese): https://blog.csdn.net/qq_38906523/article/details/80520950
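To make the shape asserts above less mysterious: with padding='same', a Keras Conv2DTranspose layer upsamples each spatial dimension by exactly its stride. A tiny illustrative calculation in plain Python (the helper name is my own):

```python
def transpose_conv_same_size(size, stride):
    # Keras Conv2DTranspose with padding='same': output = input * stride.
    return size * stride

# Trace the generator's spatial dimension through its three layers,
# strides (1, 1), (2, 2), (2, 2):  7 -> 7 -> 14 -> 28
size = 7
for stride in (1, 2, 2):
    size = transpose_conv_same_size(size, stride)
print(size)  # 28
```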

Generating an image with the (untrained) generator

generator = make_generator_model()

noise = tf.random.normal([1, 100])  # must match the generator's input_shape
generated_image = generator(noise, training=False)

plt.imshow(generated_image[0, :, :, 0], cmap='gray')

Result: a noise-like 28×28 grayscale image, since the generator has not been trained yet.

The Discriminator
def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
                                     input_shape=[28, 28, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Flatten())
    model.add(layers.Dense(1))

    return model
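The discriminator runs the generator's geometry in reverse: with padding='same', each stride-2 Conv2D layer halves the spatial size (output = ceil(input / stride)). A small sketch of the shape bookkeeping (the helper name is my own):

```python
import math

def conv_same_size(size, stride):
    # Keras Conv2D with padding='same': output = ceil(input / stride).
    return math.ceil(size / stride)

# 28 -> 14 -> 7 through the two stride-2 convolutions, so Flatten
# sees 7 * 7 * 128 = 6272 features before the final Dense(1).
size = 28
for stride in (2, 2):
    size = conv_same_size(size, stride)
print(size, 7 * 7 * 128)  # 7 6272
```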

Classifying an image with the discriminator

discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print (decision)

Result (a positive logit means the discriminator leans toward "real", a negative one toward "fake"):

tf.Tensor([[-0.00070298]], shape=(1, 1), dtype=float32)

Defining the losses and optimizers

Both losses are built on binary cross-entropy computed from raw logits. The original post omits this helper, but since neither model applies a final sigmoid, it must be created with from_logits=True:

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

Discriminator loss
def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss

The discriminator's decisions on real images are compared against an array of ones (real_loss), its decisions on generated images against an array of zeros (fake_loss), and the two terms are summed.
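For intuition, here is a numerically naive NumPy sketch of what the binary cross-entropy above computes from logits (the real tf.keras.losses.BinaryCrossentropy uses a more stable formulation; the toy logits are my own):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_from_logits(labels, logits):
    # Cross entropy between target labels and sigmoid(logits).
    p = sigmoid(np.asarray(logits, dtype=np.float64))
    y = np.asarray(labels, dtype=np.float64)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# A confident, correct discriminator: positive logits on real images,
# negative logits on fakes -> both loss terms are small.
real_loss = bce_from_logits([1, 1], [3.0, 2.0])
fake_loss = bce_from_logits([0, 0], [-3.0, -2.0])
total_loss = real_loss + fake_loss
print(total_loss < 0.2)  # True: small loss when the discriminator is right
```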

Generator loss
def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)

The generator's loss quantifies how well it fooled the discriminator: the discriminator's decisions on the generated images are compared against an array of ones, so the loss is small only when the discriminator classifies the fakes as real.

Optimizers for the generator and discriminator

The two networks are trained separately, so each gets its own Adam optimizer.
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
Saving and restoring the model

Checkpoints let a long training run resume after interruption; a saved state can later be restored with checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir)).
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
                                 discriminator_optimizer=discriminator_optimizer,
                                 generator=generator,
                                 discriminator=discriminator)
Training the model

Setup

EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16

# We will reuse this seed over time (so it's easier
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])

train_step: first a batch of random noise is fed into the generator to produce images. The discriminator then classifies both real images (from the dataset) and fake images (from the generator). Finally, each model's loss is computed and the gradients are used to update the generator and the discriminator.

# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
    # BATCH_SIZE = 256, noise_dim = 100
    noise = tf.random.normal([BATCH_SIZE, noise_dim])  # shape (256, 100)

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
      generated_images = generator(noise, training=True)  # generate 256 images

      real_output = discriminator(images, training=True)  # classify real images
      fake_output = discriminator(generated_images, training=True)  # classify fake images

      gen_loss = generator_loss(fake_output)  # low when the discriminator mistakes fakes for real
      disc_loss = discriminator_loss(real_output, fake_output)  # higher the more real/fake images the discriminator misclassifies

    # Compute the gradients for both models
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    # Apply the updates
    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))  # lower gen_loss: fakes look more real
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))  # lower disc_loss: better at telling real from fake

train: now loop over the epochs. (In the official tutorial, training is started by calling train(train_dataset, EPOCHS).)

def train(dataset, epochs):
  for epoch in range(epochs):
    start = time.time()

    for image_batch in dataset:
      train_step(image_batch)

    # Produce images for the GIF as we go
    display.clear_output(wait=True)
    generate_and_save_images(generator,
                             epoch + 1,
                             seed)

    # Save the model every 15 epochs
    if (epoch + 1) % 15 == 0:
      checkpoint.save(file_prefix = checkpoint_prefix)

    print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))

  # Generate after the final epoch
  display.clear_output(wait=True)
  generate_and_save_images(generator,
                           epochs,
                           seed)

Generating and saving images

def generate_and_save_images(model, epoch, test_input):
  # Notice `training` is set to False.
  # This is so all layers run in inference mode (batchnorm).
  predictions = model(test_input, training=False)

  fig = plt.figure(figsize=(4,4))

  for i in range(predictions.shape[0]):
      plt.subplot(4, 4, i+1)
      plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
      plt.axis('off')

  plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
  plt.show()
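The imageio package installed at the top is never actually used in this post. A sketch of how the per-epoch PNGs saved above could be stitched into an animated GIF, modeled on the official TensorFlow DCGAN tutorial (the function name is my own):

```python
import glob
import imageio

def make_gif(pattern, out_path):
    # Append each matching PNG, in sorted (epoch) order, as one GIF frame.
    filenames = sorted(glob.glob(pattern))
    with imageio.get_writer(out_path, mode='I') as writer:
        for filename in filenames:
            writer.append_data(imageio.imread(filename))
    return filenames

# After training, stitch the snapshots written by generate_and_save_images:
# make_gif('image_at_epoch_*.png', 'dcgan.gif')
```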