[Deep Neural Networks] IV. Classifying the CIFAR10 Dataset with Mini-VGG

Overview

This post covers preprocessing of the CIFAR10 dataset, a TensorFlow/Keras implementation of Mini-VGG, and the resulting CIFAR10 classifier.

For a detailed walkthrough of the VGG16 architecture paper, see: [Deep Neural Networks] III. The VGG Architecture in Detail

The full project is on GitHub: Mini-VGG-CIFAR10. If you find it useful, please give it a Star.


1. The CIFAR10 Dataset

Before using a VGG-style network to classify CIFAR10, let's look at the dataset in detail.

The CIFAR10 dataset contains 60,000 color images of size 32×32, split into 10 classes with 6,000 images each. 50,000 images are used for training, organized into 5 training batches of 10,000 images each; the remaining 10,000 form a single test batch. The test batch contains exactly 1,000 randomly selected images from each class; the rest are shuffled into the training batches. Note that an individual training batch is not necessarily class-balanced, but across all five batches each class has exactly 5,000 images.
The figure below shows 10 random images from each of the 10 classes:

[Figure: 10 random sample images per class]
The dataset was collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The first is the author of AlexNet; the third needs no introduction as one of the founders of deep learning.

The dataset can be downloaded from http://www.cs.toronto.edu/~kriz/cifar.html in three versions: Python, Matlab, and binary (suitable for C). Since I implement VGG with TensorFlow and Keras, I downloaded the Python version. As the site shows, none of the versions is particularly large, so the dataset is convenient for learning and experimentation.
[Figure: download options listed on the CIFAR10 site]
Now let's import CIFAR10. After unpacking the downloaded archive, the file structure contains five data_batch files (data_batch_1 through data_batch_5), one test_batch file, and one batches.meta file.

[Figure: contents of the extracted CIFAR10 archive]

According to the official documentation, the five data_batch files and the test_batch file were serialized with pickle, so importing CIFAR10 requires deserializing them with pickle to restore the data. The five data_batch files are the training batches and test_batch is the test set, so we first write an unpickling helper:

import numpy as np
import pickle as pk

def unpickle(data_path):
    """
    Deserialize a pickled CIFAR10 batch file.
    :param data_path: path to the batch file
    """
    # Deserialize the batch
    with open(data_path,'rb') as f:
        data_dict = pk.load(f,encoding='latin1')
        # Labels, shape (10000,)
        labels = np.array(data_dict['labels'])
        # Raw image data
        data = np.array(data_dict['data'])
        # Reshape to images, shape (10000, 32, 32, 3)
        data = np.reshape(data,(10000,3,32,32)).transpose(0,2,3,1).astype("float")
    return data,labels
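The key step above is the reshape followed by the transpose: each CIFAR10 row stores 3,072 values as 1,024 red, then 1,024 green, then 1,024 blue values, so reshaping to (N, 3, 32, 32) and transposing to (N, 32, 32, 3) recovers channels-last images. A quick check of that step with synthetic data:

```python
import numpy as np

# One fake "image" flattened the way CIFAR10 stores it:
# the red plane is all 1s, green all 2s, blue all 3s.
row = np.concatenate([np.full(1024, 1), np.full(1024, 2), np.full(1024, 3)])
data = row[np.newaxis, :]                        # shape (1, 3072)

images = data.reshape(1, 3, 32, 32).transpose(0, 2, 3, 1)

print(images.shape)       # (1, 32, 32, 3)
print(images[0, 0, 0])    # [1 2 3] -> pixel (0,0) has R=1, G=2, B=3
```

Every pixel ends up as an (R, G, B) triple in the last axis, which is the layout cv2.imwrite and Keras both expect.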

Later in this project, Mini-VGG is trained with Keras's official ImageDataGenerator, so the official CIFAR10 files must first be converted into the directory format that the ImageDataGenerator interface expects. Here we use data_batch_1 through data_batch_5 as the training set and test_batch as the validation set, i.e. 50,000 training images and 10,000 validation images.
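The layout that flow_from_directory expects is one sub-directory per class under train/ and val/, e.g. train/airplane/000000.jpg. A minimal sketch that builds (in a temporary directory, purely as an illustration) the layout the conversion script will produce:

```python
import os
import tempfile

label_names = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]

root = tempfile.mkdtemp()
for split in ("train", "val"):
    for name in label_names:
        # e.g. <root>/train/airplane/, where 000000.jpg etc. will be written
        os.makedirs(os.path.join(root, split, name), exist_ok=True)

# flow_from_directory infers the class labels from these folder names.
print(sorted(os.listdir(os.path.join(root, "train"))))
```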

Writing 60,000 images back to disk with OpenCV involves heavy I/O, so to speed up the conversion the script saves the images with an asynchronous multiprocessing pool. The CIFAR10 conversion script is as follows:
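The script below splits the image list into fixed-size chunks and submits each chunk to a process pool with Pool.apply_async. The chunking logic in isolation (batch_slices is an illustrative helper, not part of the script):

```python
import numpy as np

def batch_slices(size, batch_size):
    """Yield (start, end) index pairs covering 0..size in batch_size chunks."""
    for start in np.arange(0, size, batch_size):
        end = int(min(start + batch_size, size))  # last chunk may be short
        yield int(start), end

# 50,000 training images in chunks of 2,000: each (start, end) pair is the
# slice handed to one apply_async call.
slices = list(batch_slices(50000, 2000))
print(len(slices))    # 25
print(slices[-1])     # (48000, 50000)
```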

# -*- coding: utf-8 -*-
# @Time    : 2020/3/24 11:39
# @Author  : Dai PuWei
# @Email   : 771830171@qq.com
# @File    : cifar_preprocess.py
# @Software: PyCharm

import os
import cv2
import numpy as np
import pickle as pk
import multiprocessing
from multiprocessing import Pool

def unpickle(data_path):
    """
    Deserialize a pickled CIFAR batch file.
    :param data_path: path to the batch file
    """
    # Deserialize the batch
    with open(data_path,'rb') as f:
        data_dict = pk.load(f,encoding='latin1')
        # Labels
        #labels = np.array(data_dict['fine_labels'])          # CIFAR100
        labels = np.array(data_dict['labels'])                # CIFAR10
        # Raw image data
        data = np.array(data_dict['data'])
        # Reshape to images, shape (10000, 32, 32, 3)
        size = len(data)
        data = np.reshape(data,(size,3,32,32)).transpose(0,2,3,1).astype("int32")
    return data,labels

def save_single_image(image,image_path):
    """
    Save a single image to disk.
    :param image: image array
    :param image_path: output path
    """
    print(image_path)
    cv2.imwrite(image_path,image)

def save_batch_images(batch_images,batch_image_paths):
    """
    Save a batch of images to disk.
    :param batch_images: batch of image arrays
    :param batch_image_paths: batch of output paths
    """
    for image,image_path in zip(batch_images,batch_image_paths):
        save_single_image(image,image_path)

def cifar_preprocess(cifar_dataset_dir,new_cifar_dataset_dir,batch_size):
    """
    Convert the official CIFAR dataset into a directory-per-class layout.
    :param cifar_dataset_dir: directory of the original CIFAR dataset
    :param new_cifar_dataset_dir: output directory for the converted dataset
    :param batch_size: number of images per worker batch
    """
    # Paths of the original CIFAR10 training and test batches
    train_batch_paths = [os.path.join(cifar_dataset_dir,"data_batch_%d"%(i+1)) for i in range(5)]
    val_batch_path = os.path.join(cifar_dataset_dir,'test_batch')

    # Training and validation directories of the converted dataset
    new_train_dataset_dir = os.path.join(new_cifar_dataset_dir, "train")
    new_val_dataset_dir = os.path.join(new_cifar_dataset_dir, 'val')
    if not os.path.exists(new_cifar_dataset_dir):
        os.mkdir(new_cifar_dataset_dir)
    if not os.path.exists(new_train_dataset_dir):
        os.mkdir(new_train_dataset_dir)
    if not os.path.exists(new_val_dataset_dir):
        os.mkdir(new_val_dataset_dir)

    # One sub-directory per class for both the training and validation sets
    label_names = ["airplane","automobile","bird","cat","deer","dog","frog","horse","ship","truck"]
    for label_name in label_names:
        train_label_dir = os.path.join(new_train_dataset_dir,label_name)
        val_label_dir = os.path.join(new_val_dataset_dir, label_name)
        if not os.path.exists(train_label_dir):
            os.mkdir(train_label_dir)
        if not os.path.exists(val_label_dir):
            os.mkdir(val_label_dir)

    # Parse the original batch files
    train_data = []
    train_labels = []
    for i,train_batch_path in enumerate(train_batch_paths):
        batch_images,batch_labels = unpickle(train_batch_path)
        train_data.append(batch_images)
        train_labels.append(batch_labels)
    train_data = np.concatenate(train_data,axis=0)
    train_labels = np.concatenate(train_labels,axis=0)
    val_data,val_labels = unpickle(val_batch_path)
    print("Training data shape:",np.shape(train_data))
    print("Training labels shape:",np.shape(train_labels))
    print("Validation data shape:",np.shape(val_data))
    print("Validation labels shape:",np.shape(val_labels))

    # For each class, gather its training images and generate their output paths
    train_index = np.arange(len(train_labels))
    train_images = []
    new_train_image_paths = []
    for i,label_name in enumerate(label_names):
        # Indices of the images belonging to this class, shuffled
        label_index = np.random.permutation(train_index[train_labels == i])
        train_images.append(train_data[label_index])
        # Output paths for this class's training images
        batch_new_train_image_paths = []
        for j,index in enumerate(label_index):
            image_name = "%06d.jpg"%(j)
            new_train_image_path = os.path.join(new_train_dataset_dir,label_name,image_name)
            batch_new_train_image_paths.append(new_train_image_path)
        new_train_image_paths.append(np.array(batch_new_train_image_paths))
    train_images = np.concatenate(train_images,axis=0)
    new_train_image_paths = np.concatenate(new_train_image_paths,axis=0)

    # For each class, gather its validation images and generate their output paths
    val_index = np.arange(len(val_labels))
    val_images = []
    new_val_image_paths = []
    for i, label_name in enumerate(label_names):
        label_index = val_index[val_labels == i]
        val_images.append(val_data[label_index])
        batch_new_val_image_paths = []
        for j, index in enumerate(label_index):
            image_name = "%06d.jpg" % (j)
            new_val_image_path = os.path.join(new_val_dataset_dir, label_name, image_name)
            batch_new_val_image_paths.append(new_val_image_path)
        new_val_image_paths.append(np.array(batch_new_val_image_paths))
    val_images = np.concatenate(val_images,axis=0)
    new_val_image_paths = np.concatenate(new_val_image_paths,axis=0)

    print("Training images shape:", np.shape(train_images))
    print("Training image paths shape:", np.shape(new_train_image_paths))
    print("Validation images shape:", np.shape(val_images))
    print("Validation image paths shape:", np.shape(new_val_image_paths))
    print(new_train_image_paths)
    print(new_val_image_paths)

    # Split the training images into batches and save them with a process pool
    print("Start generating train dataset")
    train_size = len(new_train_image_paths)
    pool = Pool(processes=multiprocessing.cpu_count())
    for i,start in enumerate(np.arange(0,train_size,batch_size)):
        end = int(np.min([start+batch_size,train_size]))               # the last batch may be smaller than batch_size
        batch_train_images = train_images[start:end]
        batch_new_train_image_paths = new_train_image_paths[start:end]
        #print("Worker %d handles %d images"%(i,len(batch_new_train_image_paths)))
        pool.apply_async(save_batch_images,args=(batch_train_images,batch_new_train_image_paths))
    pool.close()
    pool.join()
    print("Finish generating train dataset")

    # Split the validation images into batches and save them with a process pool
    print("Start generating val dataset")
    val_size = len(new_val_image_paths)
    pool = Pool(processes=multiprocessing.cpu_count())
    for i, start in enumerate(np.arange(0, val_size, batch_size)):
        end = int(np.min([start + batch_size, val_size]))  # the last batch may be smaller than batch_size
        batch_val_images = val_images[start:end]
        batch_new_val_image_paths = new_val_image_paths[start:end]
        #print("Worker %d handles %d images" % (i, len(batch_new_val_image_paths)))
        pool.apply_async(save_batch_images, args=(batch_val_images, batch_new_val_image_paths))
    pool.close()
    pool.join()
    print("Finish generating val dataset")

def run_main():
    """
    Main function.
    """
    cifar_dataset_dir = os.path.abspath("./data/cifar-10-batches-py")
    new_cifar_dataset_dir = os.path.abspath("./data/cifar10")
    batch_size = 2000
    cifar_preprocess(cifar_dataset_dir, new_cifar_dataset_dir,batch_size)

if __name__ == '__main__':
    run_main()

The results are as follows:

[Figures: console output of the conversion script and the resulting directory structure]


2. Keras Implementation of Mini-VGG for Training

2.1 The Mini-VGG Architecture

All CIFAR10 images are 32 × 32, while VGG16 has a downsampling rate of 32. If we applied VGG16 directly to CIFAR10, its convolutional stages would reduce each image to a 1 × 1 × 512 feature map, discarding a great deal of spatial information and hurting classification. To extract useful features while keeping the feature map from collapsing to 1 × 1, this project trims the VGG16 structure down into a Mini-VGG.
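The size arithmetic can be checked directly: each 2×2, stride-2 max-pool halves the spatial resolution, so after VGG16's five pooling stages a 32×32 input collapses to 1×1, while with only two or three pooling stages (as in Mini-VGG) it stays at 8×8 or 4×4:

```python
def spatial_size(input_size, num_pools):
    # Each 2x2 max-pool with stride 2 halves the spatial resolution.
    return input_size // (2 ** num_pools)

print(spatial_size(32, 5))  # VGG16: five pooling stages -> 1x1 feature map
print(spatial_size(32, 2))  # two pooling stages (table below) -> 8x8
print(spatial_size(32, 3))  # three pooling stages -> 4x4
```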

The Mini-VGG architecture is:

Block 1: INPUT =>

Block 2: CONV => ReLU => BN => CONV => ReLU => BN => MAXPOOL => DROPOUT =>

Block 3: CONV => ReLU => BN => CONV => ReLU => BN => MAXPOOL => DROPOUT =>

Block 4: FC => ReLU => BN => DROPOUT =>

Block 5: FC => SOFTMAX

| Layer Type | Output Size | Filter Size / Stride |
|---|---|---|
| Input Image | 32 × 32 × 3 | |
| Conv | 32 × 32 × 32 | 3 × 3, K=32 |
| ReLU | 32 × 32 × 32 | |
| BN | 32 × 32 × 32 | |
| Conv | 32 × 32 × 32 | 3 × 3, K=32 |
| ReLU | 32 × 32 × 32 | |
| BN | 32 × 32 × 32 | |
| MaxPool | 16 × 16 × 32 | 2 × 2 |
| Dropout | 16 × 16 × 32 | |
| Conv | 16 × 16 × 64 | 3 × 3, K=64 |
| ReLU | 16 × 16 × 64 | |
| BN | 16 × 16 × 64 | |
| Conv | 16 × 16 × 64 | 3 × 3, K=64 |
| ReLU | 16 × 16 × 64 | |
| BN | 16 × 16 × 64 | |
| MaxPool | 8 × 8 × 64 | 2 × 2 |
| Dropout | 8 × 8 × 64 | |
| FC | 512 | |
| ReLU | 512 | |
| BN | 512 | |
| Dropout | 512 | |
| FC | 10 | |
| Softmax | 10 | |

2.2 Training Mini-VGG

The Keras implementation of Mini-VGG and its generator-based training code are given below. Since training involves many hyper-parameters, to avoid threading them all through the Mini-VGG class we first implement a configuration class, config, that holds every training-related parameter and can dump them all to a local txt file, making it easy to review the details of each run afterwards. The config class is defined as follows:

# -*- coding: utf-8 -*-
# @Time    : 2020/5/30 16:12
# @Author  : Dai PuWei
# @Email   : 771830171@qq.com
# @File    : config.py
# @Software: PyCharm

import os

class config(object):

    default_dict = {
        "dataset_dir": os.path.abspath("./data/cifar10"),
        "checkpoints_dir": os.path.abspath("./checkpoints"),
        "logs_dir": os.path.abspath("./logs"),
        "result_dir": os.path.abspath("./result"),
        "config_dir": os.path.abspath("./config"),
        "input_image_shape": (32, 32, 3),
        "pre_model_path": None,
        "bacth_size": 16,
        "init_learning_rate": 0.01,
        "epoch": 50,
    }

    def __init__(self,**kwargs):
        """
        Initialize the configuration, overriding the defaults with kwargs.
        :param kwargs: parameter overrides
        """
        # Apply the defaults, then the caller's overrides
        self.__dict__.update(self.default_dict)
        self.__dict__.update(kwargs)

        # Create the output directories
        if not os.path.exists(self.checkpoints_dir):
            os.mkdir(self.checkpoints_dir)
        if not os.path.exists(self.logs_dir):
            os.mkdir(self.logs_dir)
        if not os.path.exists(self.result_dir):
            os.mkdir(self.result_dir)
        if not os.path.exists(self.config_dir):
            os.mkdir(self.config_dir)

    def save_logs(self, time):
        """
        Save all training parameters for this run to a txt file.
        :param time: timestamp string identifying the run
        :return:
        """
        # Create the per-run sub-directories
        self.checkpoints_dir = os.path.join(self.checkpoints_dir, time)
        self.logs_dir = os.path.join(self.logs_dir, time)
        self.config_dir = os.path.join(self.config_dir, time)
        self.result_dir = os.path.join(self.result_dir, time)
        if not os.path.exists(self.config_dir):
            os.mkdir(self.config_dir)
        if not os.path.exists(self.checkpoints_dir):
            os.mkdir(self.checkpoints_dir)
        if not os.path.exists(self.logs_dir):
            os.mkdir(self.logs_dir)
        if not os.path.exists(self.result_dir):
            os.mkdir(self.result_dir)

        config_txt_path = os.path.join(self.config_dir, "config.txt")
        with open(config_txt_path, 'a') as f:
            for key, value in self.__dict__.items():
                s = key + ": " + str(value) + "\n"
                f.write(s)
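The core of config is the pair of __dict__.update calls: defaults are applied first, then the caller's keyword arguments win. The pattern in isolation (SimpleConfig is a stripped-down stand-in, without the directory handling):

```python
class SimpleConfig:
    default_dict = {"batch_size": 16, "epoch": 50, "init_learning_rate": 0.01}

    def __init__(self, **kwargs):
        self.__dict__.update(self.default_dict)  # start from the defaults
        self.__dict__.update(kwargs)             # caller overrides win

cfg = SimpleConfig(batch_size=128)
print(cfg.batch_size, cfg.epoch)   # 128 50
```

Every parameter becomes an instance attribute, which is why the rest of the code can read cfg.batch_size, cfg.epoch, etc. directly.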

Next, the training-stage Mini-VGG class is defined as follows:

# -*- coding: utf-8 -*-
# @Time    : 2020/5/24 17:11
# @Author  : Dai PuWei
# @Email   : 771830171@qq.com
# @File    : mini_vgg_train.py
# @Software: PyCharm

import os
import datetime
import numpy as np
from matplotlib import pyplot as plt

from keras.layers import Input
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers import MaxPooling2D
from keras.layers import Convolution2D
from keras.layers.normalization import BatchNormalization

from keras import Model
from keras.optimizers import Adam
from keras import backend as K

from keras.callbacks import TensorBoard
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
from keras.callbacks import ReduceLROnPlateau

class mini_VGG(object):

    def __init__(self,cfg):
        """
        Initialize Mini-VGG.
        :param cfg: configuration object
        """
        self.cfg = cfg

        # Build Mini-VGG and compile the model
        self.build_model()
        """
        self.model.compile(optimizer=SGD(lr=self.init_learning_rate, momentum=0.9,
                                         nesterov=True,decay= 0.01 / self.epoch),
                           loss=["categorical_crossentropy"],metrics=["acc"])
        """
        self.model.compile(optimizer=Adam(lr=self.cfg.init_learning_rate),
                           loss=["categorical_crossentropy"], metrics=["acc"])
        if self.cfg.pre_model_path is not None:
            self.model.load_weights(self.cfg.pre_model_path,by_name=True,skip_mismatch=True)
            print("loads model from: ",self.cfg.pre_model_path)

    def build_model(self):
        """
        Build the Mini-VGG network.
        :return:
        """
        # Network input
        self.image_input = Input(shape=self.cfg.input_image_shape,name="image_input")

        y = Convolution2D(filters=64, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(self.image_input)
        y = BatchNormalization()(y)
        y = Convolution2D(filters=64, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = MaxPooling2D(pool_size=2, strides=2, padding='same')(y)
        y = Dropout(0.25)(y)

        y = Convolution2D(filters=128, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = Convolution2D(filters=128, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = MaxPooling2D(pool_size=2, strides=2, padding='same')(y)

        y = Convolution2D(filters=256, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = Convolution2D(filters=256, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = MaxPooling2D(pool_size=2, strides=2, padding='same')(y)
        y = Dropout(0.25)(y)

        y = Flatten()(y)
        y = Dense(512, activation='relu', kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = Dropout(0.5)(y)
        y = Dense(10, activation='softmax', kernel_initializer='he_normal')(y)

        self.model = Model(self.image_input,y,name="Mini-VGG")
        self.model.summary()

    def train(self,train_datagen,val_datagen,train_iter_num,val_iter_num,init_epoch=0):
        """
        Train Mini-VGG.
        :param train_datagen: training data generator
        :param val_datagen: validation data generator
        :param train_iter_num: training iterations per epoch
        :param val_iter_num: validation iterations per epoch
        :param init_epoch: initial epoch
        """
        # Create the per-run directories and dump the configuration
        time = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
        self.cfg.save_logs(time)

        # Callbacks
        tensorboard = TensorBoard(self.cfg.logs_dir,)
        early_stop = EarlyStopping(monitor='val_loss',min_delta=1e-6,verbose=1,patience=10)
        reduce_lr = ReduceLROnPlateau(monitor='val_loss',factor=0.5,verbose=1,patience=2)
        checkpoint1 = ModelCheckpoint(filepath=os.path.join(self.cfg.checkpoints_dir,
                                                          'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}-'
                                                          'acc{acc:.3f}-val_acc{val_acc:.3f}.h5'),
                                    monitor='val_loss', save_best_only=True,verbose=1)
        checkpoint2 = ModelCheckpoint(filepath=os.path.join(self.cfg.checkpoints_dir,
                                                          'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}-'
                                                          'acc{acc:.3f}-val_acc{val_acc:.3f}.h5'),
                                    monitor='val_acc', save_best_only=True, verbose=1)

        # Training -- stage 1
        history1 = self.model.fit_generator(train_datagen,steps_per_epoch=train_iter_num,
                                 validation_data=val_datagen,validation_steps=val_iter_num,verbose=1,
                                 initial_epoch=init_epoch,epochs=self.cfg.epoch,
                                 callbacks=[tensorboard,checkpoint1,checkpoint2,early_stop,reduce_lr])
        self.model.save(os.path.join(self.cfg.checkpoints_dir,"stage1-trained-model.h5"))

        # Freeze every layer except the final classification layer
        for i in range(len(self.model.layers)-1):
            self.model.layers[i].trainable = False

        # Lower the learning rate (true division; // would floor 0.01/100 to 0)
        K.set_value(self.model.optimizer.lr,self.cfg.init_learning_rate / 100)

        # Re-initialize the callbacks
        checkpoint1 = ModelCheckpoint(filepath=os.path.join(self.cfg.checkpoints_dir,
                                                           'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}-'
                                                           'acc{acc:.3f}-val_acc{val_acc:.3f}.h5'),
                                     monitor='val_loss', save_best_only=True, verbose=1)
        checkpoint2 = ModelCheckpoint(filepath=os.path.join(self.cfg.checkpoints_dir,
                                                           'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}-'
                                                           'acc{acc:.3f}-val_acc{val_acc:.3f}.h5'),
                                     monitor='val_acc', save_best_only=True, verbose=1)
        reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, verbose=1, patience=2)
        # Training -- stage 2
        history2 = self.model.fit_generator(train_datagen, steps_per_epoch=train_iter_num,
                                 validation_data=val_datagen, validation_steps=val_iter_num, verbose=1,
                                 initial_epoch=self.cfg.epoch, epochs=self.cfg.epoch*2,
                                 callbacks=[tensorboard, checkpoint1,checkpoint2,early_stop,reduce_lr])
        self.model.save(os.path.join(self.cfg.checkpoints_dir, "stage2-trained-model.h5"))

        # Plot training and validation loss
        loss = np.concatenate([history1.history["loss"],history2.history["loss"]])
        val_loss = np.concatenate([history1.history["val_loss"], history2.history["val_loss"]])
        plt.plot(np.arange(0, len(loss)), loss, label="train_loss")
        plt.plot(np.arange(0, len(val_loss)), val_loss, label="val_loss")
        plt.title("Loss on CIFAR-10")
        plt.xlabel("Epoch")
        plt.ylabel("Loss")
        plt.legend()
        plt.grid(True)
        plt.savefig(os.path.join(self.cfg.result_dir,"loss.png"))
        plt.close()

        # Plot training and validation accuracy
        acc = np.concatenate([history1.history["acc"], history2.history["acc"]])
        val_acc = np.concatenate([history1.history["val_acc"], history2.history["val_acc"]])
        plt.plot(np.arange(0, len(acc)), acc, label="train_acc")
        plt.plot(np.arange(0, len(val_acc)), val_acc, label="val_acc")
        plt.title("Accuracy on CIFAR-10")
        plt.xlabel("Epoch")
        plt.ylabel("Accuracy")
        plt.legend()
        plt.grid(True)
        plt.savefig(os.path.join(self.cfg.result_dir, "accuracy.png"))
        plt.close()

The training script for Mini-VGG is given below. To improve the model's robustness, data augmentation is applied, including rotation, shifting, shearing, zooming, and horizontal/vertical flipping; note that in this script the same augmenting generator is used for both the training and validation sets.

# -*- coding: utf-8 -*-
# @Time    : 2020/5/24 22:53
# @Author  : Dai PuWei
# @Email   : 771830171@qq.com
# @File    : train_mini_vgg.py
# @Software: PyCharm

import os
#os.environ["CUDA_VISIBLE_DEVICES"] = "3"

from config.config import config
from model.mini_vgg_train import mini_VGG
from keras.preprocessing.image import ImageDataGenerator

def run_main():
    """
    Main function.
    """
    # Initialize the configuration
    batch_size = 128
    epoch = 50
    cfg = config(epoch = epoch,batch_size=batch_size)

    # Build the training and validation data generators
    train_dataset_dir = os.path.abspath("./data/cifar10/train")
    val_dataset_dir = os.path.abspath("./data/cifar10/val")
    image_data =  ImageDataGenerator(rotation_range=0.2,
                                    width_shift_range=0.05,
                                    height_shift_range=0.05,
                                    shear_range=0.05,
                                    zoom_range=0.05,
                                    horizontal_flip=True,
                                    vertical_flip=True,
                                    rescale= 1.0/255)
    train_datagen = image_data.flow_from_directory(train_dataset_dir,
                                                   class_mode='categorical',
                                                   batch_size = batch_size,
                                                   target_size=(32,32),
                                                   shuffle=True)
    val_datagen = image_data.flow_from_directory(val_dataset_dir,
                                                 class_mode='categorical',
                                                 batch_size=batch_size,
                                                 target_size=(32,32),
                                                 shuffle=True)
    train_iter_num = train_datagen.samples // batch_size
    val_iter_num = val_datagen.samples // batch_size
    if train_datagen.samples % batch_size != 0:
        train_iter_num += 1
    if val_datagen.samples % batch_size != 0:
        val_iter_num += 1

    # Initialize Mini-VGG and train it
    mini_vgg = mini_VGG(cfg)
    mini_vgg.train(train_datagen=train_datagen,
                    val_datagen=val_datagen,
                    train_iter_num=train_iter_num,
                    val_iter_num=val_iter_num)

if __name__ == '__main__':
    run_main()
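The iteration-count computation above is a ceiling division: a final partial batch still counts as one iteration. It can be written equivalently as:

```python
def iterations_per_epoch(samples, batch_size):
    # Same as: samples // batch_size, plus 1 if there is a remainder.
    return (samples + batch_size - 1) // batch_size

print(iterations_per_epoch(50000, 128))   # 391 training iterations
print(iterations_per_epoch(10000, 128))   # 79 validation iterations
```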

After training for up to 100 epochs (two stages of 50), the training/validation loss and accuracy curves are shown below. Because an early-stopping callback was set, the 100-epoch schedule actually finished in under 50 epochs. The curves show that although the validation loss fluctuates, validation accuracy converges to around 85%. Mini-VGG therefore achieves solid classification performance on CIFAR10 after training.
[Figure: training and validation loss on CIFAR-10]
[Figure: training and validation accuracy on CIFAR-10]
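The EarlyStopping callback used here (patience=10, min_delta=1e-6) stops training once val_loss has failed to improve by at least min_delta for 10 consecutive epochs. The rule can be sketched as a simplified stand-in for the Keras callback:

```python
def early_stop_epoch(val_losses, patience=10, min_delta=1e-6):
    """Return the 0-based epoch at which training would stop, or None."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:   # a real improvement resets the counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Loss plateaus after epoch 3, so training stops 10 epochs later.
losses = [1.0, 0.8, 0.6, 0.5] + [0.5] * 20
print(early_stop_epoch(losses))   # 13
```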

3. Keras Implementation of Mini-VGG for Testing

After training, the next step is to accurately evaluate Mini-VGG's performance on the dataset. The test-stage Mini-VGG class is defined as follows:

# -*- coding: utf-8 -*-
# @Time    : 2020/5/24 17:11
# @Author  : Dai PuWei
# @Email   : 771830171@qq.com
# @File    : mini_vgg_test.py
# @Software: PyCharm

import os
import numpy as np
from sklearn.metrics import classification_report

from keras import Model
from keras.models import load_model
from keras.layers import Input
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers import MaxPooling2D
from keras.layers import Convolution2D
from keras.layers.normalization import BatchNormalization

class mini_VGG(object):

    _default_dict_ = {
        "input_image_shape": (32,32,3),
        "model_path": os.path.abspath("./data/mini_vgg.h5")
    }

    def __init__(self,**kwargs):
        """
        Initialize the test-stage Mini-VGG.
        :param kwargs: parameter overrides
        """
        # Apply the defaults, then the caller's overrides
        self.__dict__.update(self._default_dict_)
        self.__dict__.update(kwargs)

        # Load the full model; fall back to loading weights into a freshly built network
        try:
            self.model = load_model(self.model_path)
        except Exception:
            self.build_model()      # build Mini-VGG
            self.model.load_weights(self.model_path,by_name=True,skip_mismatch=True)
        print("loads model from: ",self.model_path)

    def build_model(self):
        """
        Build the Mini-VGG network.
        :return:
        """
        # Network input
        self.image_input = Input(shape=self.input_image_shape,name="image_input")

        y = Convolution2D(filters=64, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(self.image_input)
        y = BatchNormalization()(y)
        y = Convolution2D(filters=64, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = MaxPooling2D(pool_size=2, strides=2, padding='same')(y)
        y = Dropout(0.25)(y)

        y = Convolution2D(filters=128, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = Convolution2D(filters=128, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = MaxPooling2D(pool_size=2, strides=2, padding='same')(y)

        y = Convolution2D(filters=256, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = Convolution2D(filters=256, kernel_size=3, strides=1, padding='same', activation='relu',
                          kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = MaxPooling2D(pool_size=2, strides=2, padding='same')(y)
        y = Dropout(0.25)(y)

        y = Flatten()(y)
        y = Dense(512, activation='relu', kernel_initializer='he_normal')(y)
        y = BatchNormalization()(y)
        y = Dropout(0.5)(y)
        y = Dense(10, activation='softmax', kernel_initializer='he_normal')(y)

        self.model = Model(self.image_input,y,name="Mini-VGG")
        self.model.summary()

    def eval_generator(self,datagen,iter_num,label_names):
        """
        Evaluate the model using a data generator.
        :param datagen: data generator
        :param iter_num: number of generator iterations
        :param label_names: class label names
        :return:
        """
        y_real = []
        y_pred = []
        for i in np.arange(iter_num):
            batch_images,batch_real_labels = datagen.__next__()
            y_real.append(np.argmax(batch_real_labels,axis=-1))
            batch_pred_labels = self.model.predict_on_batch(batch_images)
            y_pred.append(np.argmax(batch_pred_labels,axis=-1))
        y_real = np.concatenate(y_real)
        y_pred = np.concatenate(y_pred)

        return classification_report(y_real,y_pred,target_names=label_names)

The script for evaluating Mini-VGG's performance on a dataset is as follows:

# -*- coding: utf-8 -*-
# @Time    : 2020/3/24 19:15
# @Author  : Dai PuWei
# @Email   : 771830171@qq.com
# @File    : eval_on_dataset.py
# @Software: PyCharm

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

from model.mini_vgg_test import mini_VGG
from keras.preprocessing.image import ImageDataGenerator

def run_main():
    """
    Main function.
    """
    # Build the validation data generator (no augmentation, only rescaling)
    #dataset_dir = os.path.join(cfg.dataset_dir, "train")
    dataset_dir = os.path.abspath("./data/cifar10/val")
    image_data = ImageDataGenerator(rescale=1.0 / 255)
    datagen = image_data.flow_from_directory(dataset_dir,
                                             class_mode='categorical',
                                             batch_size = 1,
                                             target_size=(32,32),
                                             shuffle=False)

    # Evaluation parameters
    iter_num = datagen.samples       # iterations for one full pass over the validation set
    label_names = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']

    # Initialize Mini-VGG and run the evaluation
    model_path = os.path.abspath("./checkpoints/20200530195246/stage2-trained-model.h5")
    mini_vgg = mini_VGG(model_path=model_path)
    print(mini_vgg.eval_generator(datagen,iter_num,label_names))

if __name__ == '__main__':
    run_main()

Mini-VGG's evaluation results on the CIFAR10 validation set are as follows:

[Figure: classification report on the validation set]
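classification_report summarizes per-class precision, recall, and F1. For reference, this is what precision and recall mean for a single class, computed by hand (precision_recall is an illustrative helper, not part of the project code):

```python
import numpy as np

def precision_recall(y_true, y_pred, cls):
    """Precision and recall for one class label."""
    tp = np.sum((y_pred == cls) & (y_true == cls))  # correctly predicted as cls
    fp = np.sum((y_pred == cls) & (y_true != cls))  # wrongly predicted as cls
    fn = np.sum((y_pred != cls) & (y_true == cls))  # cls missed by the model
    return tp / (tp + fp), tp / (tp + fn)

y_true = np.array([0, 0, 1, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0])
p, r = precision_recall(y_true, y_pred, cls=1)
print(p, r)   # precision 2/3, recall 2/3
```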
