GAN, CGAN, and DCGAN on the MNIST dataset: source code and a comparison of training results


    When I started teaching myself artificial neural networks I tried ANN, CNN, and RNN, and later experimented with computer vision (object detection), hearing (voiceprint recognition), and reinforcement learning (Gomoku), all with my own code. I also built a real project, a voucher stamp and signature checking system (《凭证印章签字检查系统》), for batch checking of bank accounting vouchers before archiving; it was successfully deployed and put into use.

    But I had never tried generative adversarial networks. Recently I gathered some source code and gave it a try. Here are some notes and lessons learned, along with the code I used and the training results. Comments and corrections are welcome.

Results first:

GAN trained for 30,000 iterations with a batch size of 32; both the generator and the discriminator use fully connected layers:

The images are not very sharp and contain a lot of white speckles, but the digits are mostly recognizable.

The CGAN was trained for only 2,000 iterations. It uses the same network as the GAN, only with labels added. The digits were roughly being generated in label order, so I stopped training there, because the DCGAN that follows is the really fun part:
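As a minimal sketch (my own illustration, mirroring the label-injection code in the CGAN script below; the variable names are just placeholders): the integer label is embedded into a 10-dimensional vector and concatenated with the noise, so the generator input grows from 100 to 110 dimensions.

from tensorflow.keras.layers import Input, Embedding, Flatten, Concatenate

noise = Input(shape=(100,))                                 # latent noise vector z
label = Input(shape=(1,), dtype='int32')                    # digit class label, 0-9
label_embedding = Flatten()(Embedding(10, 10)(label))       # (batch, 10)
gen_input = Concatenate(axis=1)([noise, label_embedding])   # (batch, 110), fed into the same MLP as the plain GAN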

The DCGAN tutorials online call for LeakyReLU for all activations, Conv2DTranspose for all upsampling, strided Conv2D (stride 2) for all downsampling, and tanh / sigmoid for the output activations:
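As a hedged sketch of what those rules look like as Keras layers (the filter counts here are placeholders, not the exact ones used in the scripts further down):

from tensorflow.keras.layers import Conv2D, Conv2DTranspose, LeakyReLU, Dense

# Generator upsampling block: strided transposed convolution + LeakyReLU
g_up  = Conv2DTranspose(64, kernel_size=3, strides=2, padding='same')
g_act = LeakyReLU(alpha=0.2)
# Generator output squashed to [-1, 1] with tanh, matching images normalized by x / 127.5 - 1
g_out = Conv2D(1, kernel_size=3, padding='same', activation='tanh')

# Discriminator downsampling block: strided convolution + LeakyReLU, no pooling layers
d_down = Conv2D(64, kernel_size=3, strides=2, padding='same')
d_act  = LeakyReLU(alpha=0.2)
# Discriminator output: a single sigmoid probability (real vs. fake)
d_out  = Dense(1, activation='sigmoid')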

First attempt: ReLU everywhere, UpSampling2D for upsampling, and MaxPooling2D for downsampling. Sure enough it did not work: the generated images were all identical and did not look like digits at all. My guess at the cause: ReLU activations and pooling layers leave some neurons without a usable gradient during backpropagation, and since the generator's convolutional stack is fairly deep, training becomes difficult, while the discriminator ends up with dead neurons from repeatedly learning positive and negative samples.
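To illustrate the suspected dead-ReLU issue (a toy example of my own, not taken from the experiments): for negative pre-activations relu passes back a zero gradient, so a neuron stuck in the negative region stops updating, while leaky_relu keeps a small gradient flowing.

import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.5, 2.0])
with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)
    y_relu = tf.nn.relu(x)
    y_leaky = tf.nn.leaky_relu(x, alpha=0.2)

print(tape.gradient(y_relu, x).numpy())   # [0.  0.  1.  1. ]  -> zero gradient for negative inputs
print(tape.gradient(y_leaky, x).numpy())  # [0.2 0.2 1.  1. ]  -> small but non-zero gradient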

Second attempt: the generator uses ReLU and UpSampling2D, with no activation on its output layer; the discriminator uses fully connected layers. Results after 30,000 iterations:

The results are actually quite good!

Third attempt: following the tutorial's activation functions and up/downsampling layers, with both the generator and the discriminator as convolutional networks. Results after 30,000 iterations:

Not ideal. I probably only picked up the recipe, not the essence; some hyperparameters must be off.

Fourth attempt: the generator follows the tutorial's network and the discriminator uses fully connected layers. Results after 30,000 iterations:

Decent.

Summary:

1. Adversarial networks are demanding about activation functions and hyperparameters; experiments fail easily, so repeated trial and error is needed to build up experience.

2. Adversarial training is not very stable.

3. Guidance from experienced people makes progress much faster; pointers are very welcome.

The experiment source code follows.

GAN:

from __future__ import print_function, division

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Dropout
from tensorflow.keras.layers import BatchNormalization, Activation, ZeroPadding2D
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import UpSampling2D, Conv2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
import os
import matplotlib.pyplot as plt

import sys

import numpy as np


class GAN(object):
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100

        optimizer = Adam(0.0002, 0.5)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates fake images
        z = Input(shape=(self.latent_dim,))
        img = self.generator(z)

        # For the combined model, train only the generator
        self.discriminator.trainable = False

        # The discriminator takes the generated image as input and judges its validity
        validity = self.discriminator(img)

        # The combined model  (stacked generator and discriminator)
        # Train the generator to fool the discriminator
        self.combined = Model(z, validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):

        model = Sequential()

        model.add(Dense(256, input_dim=self.latent_dim))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))

        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))

        model.add(Dense(1024))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))

        # np.prod(self.img_shape)=28x28x1
        model.add(Dense(np.prod(self.img_shape), activation='tanh'))
        model.add(Reshape(self.img_shape))

        model.summary()

        noise = Input(shape=(self.latent_dim,))
        img = model(noise)

        # Input: noise; output: image
        return Model(noise, img)

    def build_discriminator(self):

        model = Sequential()

        model.add(Flatten(input_shape=self.img_shape))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))
        model.summary()

        img = Input(shape=self.img_shape)
        validity = model(img)

        return Model(img, validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        dataPath = 'C:/Users/lenovo/Desktop/MinstGan/mnist.npz'
        # Load the dataset
        (x_train, _), (_, _) = mnist.load_data(path=dataPath)

        # Normalize to [-1, 1]
        x_train = x_train / 127.5 - 1.
        print(x_train.shape)
        x_train = np.expand_dims(x_train, axis=3)
        print(x_train.shape)

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):

            # ---------------------
            # Train the discriminator
            # ---------------------

            # x_train.shape[0] is the dataset size; draw batch_size random integers to use as data indices
            idx = np.random.randint(0, x_train.shape[0], batch_size)

            # Randomly pick batch_size samples from the dataset as one training batch
            imgs = x_train[idx]

            # Noise of shape (batch_size, 100)
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Generate fake images from the noise with the generator
            gen_imgs = self.generator.predict(noise)

            # Train the discriminator: real images get label 1, fake images get label 0
            d_loss_real = self.discriminator.train_on_batch(imgs, valid)
            d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # ---------------------
            #  Train the generator
            # ---------------------

            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch(noise, valid)

            # Print the losses
            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))

            # Save sample images every sample_interval iterations
            if epoch % sample_interval == 0:
                self.sample_images(epoch)
                if not os.path.exists("keras_model"):
                    os.makedirs("keras_model")
                self.generator.save_weights("keras_model/G_model%d.hdf5" % epoch, True)
                self.discriminator.save_weights("keras_model/D_model%d.hdf5" % epoch, True)

    def sample_images(self, epoch):
        r, c = 5, 5
        # Generate a fresh batch of noise of shape (25, 100)
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        gen_imgs = self.generator.predict(noise)

        # Rescale the generated images back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        if not os.path.exists("keras_imgs"):
            os.makedirs("keras_imgs")
        fig.savefig("keras_imgs/%d.png" % epoch)
        plt.close()

    def test(self, gen_nums=100):
        self.generator.load_weights("keras_model/G_model15000.hdf5", by_name=True)
        self.discriminator.load_weights("keras_model/D_model15000.hdf5", by_name=True)
        noise = np.random.normal(0, 1, (gen_nums, self.latent_dim))
        gen = self.generator.predict(noise)
        print(gen.shape)
        # Rescale images to [0, 1]
        gen = 0.5 * gen + 0.5
        for i in range(0, len(gen)):
            plt.figure(figsize=(128, 128), dpi=1)
            plt.imshow(gen[i, :, :, 0], cmap="gray")
            plt.axis("off")
            if not os.path.exists("keras_gen"):
                os.makedirs("keras_gen")
            plt.savefig("keras_gen" + os.sep + str(i) + '.jpg', dpi=1)
            plt.close()


if __name__ == '__main__':
    gan = GAN()
    gan.train(epochs=30000, batch_size=32, sample_interval=1000)
    gan.test()

 

CGAN:

from tensorflow.keras.datasets import mnist
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Dropout
from tensorflow.keras.layers import BatchNormalization, Activation, ZeroPadding2D
from tensorflow.keras.layers import LeakyReLU,Embedding
from tensorflow.keras.layers import UpSampling2D, Conv2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
import os
import matplotlib.pyplot as plt

import sys

import numpy as np


class GAN(object):
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100

        optimizer = Adam(0.0002, 0.5)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])
        # self.discriminator.compile(loss='sparse_categorical_crossentropy',
        #                            optimizer=optimizer,
        #                            metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates fake images
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,))
        img = self.generator([noise, label])

        # For the combined model, train only the generator
        self.discriminator.trainable = False

        # The discriminator takes the generated image as input and judges its validity
        validity = self.discriminator([img, label])

        # The combined model  (stacked generator and discriminator)
        # Train the generator to fool the discriminator
        self.combined = Model([noise, label], validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):

        model = Sequential()

        model.add(Dense(256, input_dim=self.latent_dim + 10))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))

        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))

        model.add(Dense(1024))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))

        # np.prod(self.img_shape)=28x28x1
        model.add(Dense(np.prod(self.img_shape), activation='tanh'))
        model.add(Reshape(self.img_shape))

        model.summary()

        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, 10)(label))
        model_input = K.concatenate([noise, label_embedding], axis=1)
        img = model(model_input)

        # Input: noise; output: image
        return Model([noise, label], img)

    def build_discriminator(self):

        model = Sequential()

        model.add(Flatten(input_shape=(28,28,2)))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))
        model.summary()
        img = Input(shape=self.img_shape)  # input (28, 28, 1)
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, np.prod(self.img_shape))(label))
        flat_img = Flatten()(img)
        model_input = K.concatenate([flat_img, label_embedding], axis = -1)
        validity = model(model_input)

        return Model([img, label], validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        dataPath = 'C:/Users/lenovo/Desktop/MinstGan/mnist.npz'
        # Load the dataset
        (x_train, y_train), (_, _) = mnist.load_data(path=dataPath)

        # Normalize to [-1, 1]
        x_train = x_train / 127.5 - 1.
        print(x_train.shape)
        x_train = np.expand_dims(x_train, axis=3)
        print(x_train.shape)
        y_train = np.expand_dims(y_train, axis=1)
        print(y_train.shape)  # (60000, 1)

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):

            # ---------------------
            # Train the discriminator
            # ---------------------

            # x_train.shape[0] is the dataset size; draw batch_size random integers to use as data indices
            idx = np.random.randint(0, x_train.shape[0], batch_size)

            # Randomly pick batch_size samples from the dataset as one training batch
            imgs = x_train[idx]
            labels = y_train[idx]

            # Noise of shape (batch_size, 100)
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Generate fake images from the noise with the generator
            gen_imgs = self.generator.predict([noise, labels])
            # Train the discriminator: real images get label 1, fake images get label 0
            d_loss_real = self.discriminator.train_on_batch([imgs, labels], valid)
            d_loss_fake2 = self.discriminator.train_on_batch([gen_imgs, labels], fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake2)  # average the real/fake [loss, acc]; a plain list "+" would only concatenate them

            # ---------------------
            #  Train the generator
            # ---------------------
            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch([noise, labels], valid)


            # Save sample images every sample_interval iterations
            if epoch % sample_interval == 0:
                # Print the losses
                print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))
                self.sample_images(epoch)
                if not os.path.exists("keras_model"):
                    os.makedirs("keras_model")
                self.generator.save_weights("keras_model/G_model%d.hdf5" % epoch, True)
                self.discriminator.save_weights("keras_model/D_model%d.hdf5" % epoch, True)

    def sample_images(self, epoch):
        r, c = 4, 5
        # Generate a fresh batch of noise of shape (r * c, 100)
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        sampled_labels = np.concatenate([np.arange(0, 10).reshape(-1, 1), np.arange(0, 10).reshape(-1, 1)])
        gen_imgs = self.generator.predict([noise, sampled_labels])

        # Rescale the generated images back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        if not os.path.exists("keras_imgs"):
            os.makedirs("keras_imgs")
        fig.savefig("keras_imgs/%d.png" % epoch)
        plt.close()

    def retore(self, epoch):
        self.generator.load_weights("keras_model/G_model%d.hdf5" % epoch)
        self.discriminator.load_weights("keras_model/D_model%d.hdf5" % epoch)

if __name__ == '__main__':
    gan = GAN()
    # gan.retore(11000)
    gan.train(epochs=30000, batch_size=32, sample_interval=500)

 

DCGAN1:

from tensorflow.keras.datasets import mnist
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Dropout
from tensorflow.keras.layers import BatchNormalization, Activation, ZeroPadding2D
from tensorflow.keras.layers import LeakyReLU,Embedding
from tensorflow.keras.layers import UpSampling2D, Conv2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
import os
import matplotlib.pyplot as plt

import sys

import numpy as np


class GAN(object):
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100

        optimizer = Adam(0.0002, 0.5)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])
        # self.discriminator.compile(loss='sparse_categorical_crossentropy',
        #                            optimizer=optimizer,
        #                            metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates fake images
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,))
        img = self.generator([noise, label])

        # For the combined model, train only the generator
        self.discriminator.trainable = False

        # The discriminator takes the generated image as input and judges its validity
        validity = self.discriminator([img, label])

        # The combined model  (stacked generator and discriminator)
        # Train the generator to fool the discriminator
        self.combined = Model([noise, label], validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):

        model = Sequential()

        model.add(Dense(128, activation='relu', input_dim=110))
        model.add(Dense(196, activation='relu'))
        model.add(Reshape((7, 7, 4)))
        model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
        model.add(UpSampling2D(size=(2,2)))
        model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
        model.add(UpSampling2D(size=(2,2)))
        model.add(Conv2D(16, (3, 3), activation='relu', padding='same'))
        model.add(Conv2D(1, (3, 3), padding='same'))

        model.summary()

        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, 10)(label))
        model_input = K.concatenate([noise, label_embedding], axis=1)
        img = model(model_input)

        # Input: noise; output: image
        return Model([noise, label], img)

    def build_discriminator(self):

        model = Sequential()

        model.add(Flatten(input_shape=(28,28,2)))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))
        model.summary()
        img = Input(shape=self.img_shape)  # input (28, 28, 1)
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, np.prod(self.img_shape))(label))
        flat_img = Flatten()(img)
        model_input = K.concatenate([flat_img, label_embedding], axis = -1)
        validity = model(model_input)

        return Model([img, label], validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        dataPath = 'C:/Users/lenovo/Desktop/MinstGan/mnist.npz'
        # Load the dataset
        (x_train, y_train), (_, _) = mnist.load_data(path=dataPath)

        # Normalize to [-1, 1]
        x_train = x_train / 127.5 - 1.
        print(x_train.shape)
        x_train = np.expand_dims(x_train, axis=3)
        print(x_train.shape)
        y_train = np.expand_dims(y_train, axis=1)
        print(y_train.shape)  # (60000, 1)

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):

            # ---------------------
            # Train the discriminator
            # ---------------------

            # x_train.shape[0] is the dataset size; draw batch_size random integers to use as data indices
            idx = np.random.randint(0, x_train.shape[0], batch_size)

            # Randomly pick batch_size samples from the dataset as one training batch
            imgs = x_train[idx]
            labels = y_train[idx]

            # Noise of shape (batch_size, 100)
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Generate fake images from the noise with the generator
            gen_imgs = self.generator.predict([noise, labels])
            # Train the discriminator: real images get label 1, fake images get label 0
            d_loss_real = self.discriminator.train_on_batch([imgs, labels], valid)
            d_loss_fake2 = self.discriminator.train_on_batch([gen_imgs, labels], fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake2)  # average the real/fake [loss, acc]; a plain list "+" would only concatenate them

            # ---------------------
            #  Train the generator
            # ---------------------
            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch([noise, labels], valid)

            # Print the losses
            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))

            # Save sample images every sample_interval iterations
            if epoch % sample_interval == 0:
                self.sample_images(epoch)
                if not os.path.exists("keras_model"):
                    os.makedirs("keras_model")
                self.generator.save_weights("keras_model/G_model%d.hdf5" % epoch, True)
                self.discriminator.save_weights("keras_model/D_model%d.hdf5" % epoch, True)

    def sample_images(self, epoch):
        r, c = 4, 5
        # Generate a fresh batch of noise of shape (r * c, 100)
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        sampled_labels = np.concatenate([np.arange(0, 10).reshape(-1, 1), np.arange(0, 10).reshape(-1, 1)])
        gen_imgs = self.generator.predict([noise, sampled_labels])

        # Rescale the generated images back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        if not os.path.exists("keras_imgs"):
            os.makedirs("keras_imgs")
        fig.savefig("keras_imgs/%d.png" % epoch)
        plt.close()

    def retore(self, epoch):
        self.generator.load_weights("keras_model/G_model%d.hdf5" % epoch)
        self.discriminator.load_weights("keras_model/D_model%d.hdf5" % epoch)

if __name__ == '__main__':
    gan = GAN()
    # gan.retore(500)
    gan.train(epochs=30000, batch_size=32, sample_interval=300)

 

DCGAN2:

from tensorflow.keras.datasets import mnist
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Dropout
from tensorflow.keras.layers import BatchNormalization, Activation, ZeroPadding2D
from tensorflow.keras.layers import LeakyReLU,Embedding
from tensorflow.keras.layers import UpSampling2D, Conv2D, MaxPooling2D, Conv2DTranspose
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
import os
import matplotlib.pyplot as plt

import sys

import numpy as np


class GAN(object):
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100
        self.dropout = 0.2
        optimizer = Adam(0.0001, 0.9)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])
        # self.discriminator.compile(loss='sparse_categorical_crossentropy',
        #                            optimizer=optimizer,
        #                            metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates fake images
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,))
        img = self.generator([noise, label])

        # For the combined model, train only the generator
        self.discriminator.trainable = False

        # The discriminator takes the generated image as input and judges its validity
        validity = self.discriminator([img, label])

        # The combined model  (stacked generator and discriminator)
        # Train the generator to fool the discriminator
        self.combined = Model([noise, label], validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer,
                                   metrics=['accuracy'])

    def build_generator(self):

        model = Sequential()

        model.add(Dense(64 * 7 * 7,  input_dim=110))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Reshape((7, 7, 64)))
        model.add(Conv2DTranspose(64, kernel_size = 3, strides = 2, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2DTranspose(32,  kernel_size = 3, strides = 2, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2D(16,  kernel_size = 3, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2D(1, (3, 3), padding='same', activation='tanh'))

        # model.add(Dense(128, activation='relu', input_dim=110))
        # model.add(Dense(16 * 7 * 7, activation='relu'))
        # model.add(Reshape((7, 7, 16)))
        # model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
        # model.add(UpSampling2D(size=(2,2)))
        # model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
        # model.add(UpSampling2D(size=(2,2)))
        # model.add(Conv2D(16, (3, 3), activation='relu', padding='same'))
        # model.add(Conv2D(1, (3, 3), padding='same'))

        model.summary()

        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, 10)(label))
        model_input = K.concatenate([noise, label_embedding], axis=1)
        img = model(model_input)

        # Input: noise; output: image
        return Model([noise, label], img)

    def build_discriminator(self):

        model = Sequential()

        model.add(Conv2D(64, (3, 3), strides=(2,2), input_shape = (28, 28, 2)))
        model.add(LeakyReLU(alpha=0.2))  # the discriminator should not use plain ReLU: repeated real/fake labeling can cause dead neurons
        model.add(Dropout(self.dropout))
        model.add(Conv2D(128, (3, 3), strides=(2,2)))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(self.dropout))
        model.add(Conv2D(128, (3, 3), strides=(2,2)))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(self.dropout))
        model.add(Flatten())
        model.add(Dense(64))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))

        model.summary()
        img = Input(shape=self.img_shape)  # input (28, 28, 1)
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, np.prod(self.img_shape))(label))
        # flat_img = Flatten()(img)
        label_embedding = Reshape(self.img_shape)(label_embedding)
        model_input = K.concatenate([img, label_embedding], axis = -1)
        print(model_input.shape)
        validity = model(model_input)

        return Model([img, label], validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        dataPath = 'C:/Users/lenovo/Desktop/MinstGan/mnist.npz'
        # Load the dataset
        (x_train, y_train), (_, _) = mnist.load_data(path=dataPath)

        # Normalize to [-1, 1]
        x_train = x_train / 127.5 - 1.
        print(x_train.shape)
        x_train = np.expand_dims(x_train, axis=3)
        print(x_train.shape)
        y_train = np.expand_dims(y_train, axis=1)
        print(y_train.shape)  # (60000, 1)

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):

            # ---------------------
            # Train the discriminator
            # ---------------------

            # x_train.shape[0] is the dataset size; draw batch_size random integers to use as data indices
            idx = np.random.randint(0, x_train.shape[0], batch_size)

            # Randomly pick batch_size samples from the dataset as one training batch
            imgs = x_train[idx]
            labels = y_train[idx]

            # Noise of shape (batch_size, 100)
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Generate fake images from the noise with the generator
            gen_imgs = self.generator.predict([noise, labels])
            # Train the discriminator: real images get label 1, fake images get label 0
            d_loss_real = self.discriminator.train_on_batch([imgs, labels], valid)
            d_loss_fake = self.discriminator.train_on_batch([gen_imgs, labels], fake)

            # ---------------------
            #  Train the generator
            # ---------------------
            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch([noise, labels], valid)

            # Print the losses: accuracy on real images, accuracy on fake images, and the generator's fooling success rate
            print("%d [D realloss: %f, realacc: %.2f%%, fakeloss: %f, fakeacc: %.2f%%] [G loss: %f, acc = %.2f%%]" %
                  (epoch, d_loss_real[0], 100 * d_loss_real[1], d_loss_fake[0], 100 * d_loss_fake[1], g_loss[0], 100 * g_loss[1]))

            # Save sample images every sample_interval iterations
            if epoch % sample_interval == 0:
                self.sample_images(epoch)
                if not os.path.exists("keras_model"):
                    os.makedirs("keras_model")
                self.generator.save_weights("keras_model/G_model%d.hdf5" % epoch, True)
                self.discriminator.save_weights("keras_model/D_model%d.hdf5" % epoch, True)

    def sample_images(self, epoch):
        r, c = 4, 5
        # Generate a fresh batch of noise of shape (r * c, 100)
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        sampled_labels = np.concatenate([np.arange(0, 10).reshape(-1, 1), np.arange(0, 10).reshape(-1, 1)])
        gen_imgs = self.generator.predict([noise, sampled_labels])

        # Rescale the generated images back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        if not os.path.exists("keras_imgs"):
            os.makedirs("keras_imgs")
        fig.savefig("keras_imgs/%d.png" % epoch)
        plt.close()

    def retore(self, epoch):
        self.generator.load_weights("keras_model/G_model%d.hdf5" % epoch)
        self.discriminator.load_weights("keras_model/D_model%d.hdf5" % epoch)

if __name__ == '__main__':
    gan = GAN()
    # gan.retore(4500)
    gan.train(epochs=30000, batch_size=32, sample_interval=300)

 

DC_GAN3:

from tensorflow.keras.datasets import mnist
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Dropout
from tensorflow.keras.layers import BatchNormalization, Activation, ZeroPadding2D
from tensorflow.keras.layers import LeakyReLU,Embedding,Conv2DTranspose
from tensorflow.keras.layers import UpSampling2D, Conv2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
import os
import matplotlib.pyplot as plt

import sys

import numpy as np


class GAN(object):
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100

        optimizer = Adam(0.0002, 0.5)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])
        # self.discriminator.compile(loss='sparse_categorical_crossentropy',
        #                            optimizer=optimizer,
        #                            metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates fake images
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,))
        img = self.generator([noise, label])

        # For the combined model, train only the generator
        self.discriminator.trainable = False

        # The discriminator takes the generated image as input and judges its validity
        validity = self.discriminator([img, label])

        # The combined model  (stacked generator and discriminator)
        # Train the generator to fool the discriminator
        self.combined = Model([noise, label], validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):

        model = Sequential()

        # model.add(Dense(128, activation='relu', input_dim=110))
        # model.add(Dense(196, activation='relu'))
        # model.add(Reshape((7, 7, 4)))
        # model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
        # model.add(UpSampling2D(size=(2,2)))
        # model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
        # model.add(UpSampling2D(size=(2,2)))
        # model.add(Conv2D(16, (3, 3), activation='relu', padding='same'))
        # model.add(Conv2D(1, (3, 3), padding='same'))

        model.add(Dense(64 * 7 * 7,  input_dim=110))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Reshape((7, 7, 64)))
        model.add(Conv2DTranspose(64, kernel_size = 3, strides = 2, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2DTranspose(32,  kernel_size = 3, strides = 2, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2D(16,  kernel_size = 3, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2D(1, (3, 3), padding='same', activation='tanh'))

        model.summary()

        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, 10)(label))
        model_input = K.concatenate([noise, label_embedding], axis=1)
        img = model(model_input)

        # Input: noise; output: image
        return Model([noise, label], img)

    def build_discriminator(self):

        model = Sequential()

        model.add(Flatten(input_shape=(28,28,2)))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))
        model.summary()
        img = Input(shape=self.img_shape)  # input (28, 28, 1)
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, np.prod(self.img_shape))(label))
        flat_img = Flatten()(img)
        model_input = K.concatenate([flat_img, label_embedding], axis = -1)
        validity = model(model_input)

        return Model([img, label], validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        dataPath = 'C:/Users/lenovo/Desktop/MinstGan/mnist.npz'
        # Load the dataset
        (x_train, y_train), (_, _) = mnist.load_data(path=dataPath)

        # Normalize to [-1, 1]
        x_train = x_train / 127.5 - 1.
        print(x_train.shape)
        x_train = np.expand_dims(x_train, axis=3)
        print(x_train.shape)
        y_train = np.expand_dims(y_train, axis=1)
        print(y_train.shape)  # (60000, 1)

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):

            # ---------------------
            # Train the discriminator
            # ---------------------

            # x_train.shape[0] is the dataset size; draw batch_size random integers to use as data indices
            idx = np.random.randint(0, x_train.shape[0], batch_size)

            # Randomly pick batch_size samples from the dataset as one training batch
            imgs = x_train[idx]
            labels = y_train[idx]

            # Noise of shape (batch_size, 100)
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Generate fake images from the noise with the generator
            gen_imgs = self.generator.predict([noise, labels])
            # Train the discriminator: real images get label 1, fake images get label 0
            d_loss_real = self.discriminator.train_on_batch([imgs, labels], valid)
            d_loss_fake2 = self.discriminator.train_on_batch([gen_imgs, labels], fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake2)  # average the real/fake [loss, acc]; a plain list "+" would only concatenate them

            # ---------------------
            #  Train the generator
            # ---------------------
            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch([noise, labels], valid)

            # Print the losses
            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))

            # Save sample images every sample_interval iterations
            if epoch % sample_interval == 0:
                self.sample_images(epoch)
                if not os.path.exists("keras_model"):
                    os.makedirs("keras_model")
                self.generator.save_weights("keras_model/G_model%d.hdf5" % epoch, True)
                self.discriminator.save_weights("keras_model/D_model%d.hdf5" % epoch, True)

    def sample_images(self, epoch):
        r, c = 4, 5
        # Generate a fresh batch of noise of shape (r * c, 100)
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        sampled_labels = np.concatenate([np.arange(0, 10).reshape(-1, 1), np.arange(0, 10).reshape(-1, 1)])
        gen_imgs = self.generator.predict([noise, sampled_labels])

        # Rescale the generated images back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        if not os.path.exists("keras_imgs"):
            os.makedirs("keras_imgs")
        fig.savefig("keras_imgs/%d.png" % epoch)
        plt.close()

    def retore(self, epoch):
        self.generator.load_weights("keras_model/G_model%d.hdf5" % epoch)
        self.discriminator.load_weights("keras_model/D_model%d.hdf5" % epoch)

if __name__ == '__main__':
    gan = GAN()
    # gan.retore(500)
    gan.train(epochs=30000, batch_size=32, sample_interval=300)

 
