Deep Learning Principles and Practice, Chapter 4: The AlexNet Convolutional Neural Network

1. The AlexNet Model

1.1 Key Features of the AlexNet Architecture

(1) ReLU activation: ReLU is much cheaper to compute than the sigmoid activation and mitigates the vanishing-gradient problem, which speeds up training and improves the network's accuracy.

(2) Dropout: randomly deactivates a fraction of the neurons during training, which keeps the network from overfitting.

(3) Max-pooling down-sampling: shrinks the feature maps, reducing the number of downstream parameters while improving prediction accuracy.

(4) Dual-GPU architecture: splitting the model across two GPUs greatly reduces training time. (A minimal Keras sketch of the first three building blocks follows this list.)
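These ideas map directly onto standard Keras layers. The sketch below is illustrative only, not the full model built later in this chapter; the tiny layer sizes and the 32 x 32 input are arbitrary choices for the demo:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

demo = Sequential([
    Conv2D(8, (3, 3), activation = 'relu', input_shape = (32, 32, 3)),  # (1) ReLU activation
    MaxPooling2D((2, 2), strides = 2),                                  # (3) max-pooling down-sampling
    Flatten(),
    Dropout(0.5),                                                       # (2) Dropout during training
    Dense(10, activation = 'softmax'),
])
demo.summary()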

  The figure below shows the AlexNet network model. The architecture has a total of 5 convolutional layers, 3 max-pooling layers, and 2 fully connected layers, and its input is a 3 x 224 x 224 three-channel color image.

  In this dual-GPU version of the network, the convolution kernel sizes from left to right are 11, 5, 3, 3 and 3; the input image is 224 x 224, and the feature maps entering the final pooling stage are 13 x 13.

1.2 AlexNet Architecture Details

  • Input layer (Input): a 3 x 224 x 224 image tensor.
  • Convolutional layer (Conv1): 96 kernels of size 11 x 11 (48 kernels per GPU).
  • Pooling layer (Pool1): max pooling with a 3 x 3 window, stride = 2.
  • Convolutional layer (Conv2): 256 kernels of size 5 x 5 (128 kernels per GPU).
  • Pooling layer (Pool2): max pooling with a 3 x 3 window, stride = 2.
  • Convolutional layer (Conv3): 384 kernels of size 3 x 3 (192 kernels per GPU).
  • Convolutional layer (Conv4): 384 kernels of size 3 x 3 (192 kernels per GPU).
  • Convolutional layer (Conv5): 256 kernels of size 3 x 3 (128 kernels per GPU).
  • Pooling layer (Pool5): max pooling with a 3 x 3 window, stride = 2.
  • Fully connected layer (FC1): flattens the output of the fifth-layer pooling into a one-dimensional vector and outputs 4096 units.
  • Fully connected layer (FC2): input and output are both 4096 units.
  • Softmax output layer: 1000 units, each giving the probability that the image belongs to the corresponding class; the output dimension is 1000 because ImageNet has 1000 classes. (A sanity check on the layer sizes follows this list.)
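As a quick sanity check on these numbers, the spatial size after each 'valid' convolution or pooling stage follows output = (input - kernel) / stride + 1. A minimal sketch, assuming the 227 x 227 input and the 3 x 3, stride-2 overlapping pooling used in the code later in this chapter:

def out_size(size, kernel, stride):
    """Spatial output size of a 'valid' convolution or pooling stage."""
    return (size - kernel) // stride + 1

s = 227
s = out_size(s, 11, 4)   # Conv1 -> 55
s = out_size(s, 3, 2)    # Pool1 -> 27
# Conv2 through Conv5 use 'same' padding, so they preserve the spatial size
s = out_size(s, 3, 2)    # Pool2 -> 13 (the 13 x 13 feature maps mentioned above)
s = out_size(s, 3, 2)    # Pool5 -> 6
print(s)                 # 6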

2. Implementing the AlexNet Architecture

2.1 Using the Kaggle Cats-vs-Dogs Dataset

  The cats-and-dogs dataset is stored on disk with the directory structure shown below.
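Since flow_from_directory (used in the next section) requires one sub-directory per class, a layout along the following lines is assumed:

data/
├── train/
│   ├── cats/    # cat training images, e.g. cat.0.jpg, cat.1.jpg, ...
│   └── dogs/    # dog training images
└── test/
    ├── cats/    # cat validation images
    └── dogs/    # dog validation images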

2.2 Data Augmentation

from keras.models import Model
from keras.layers import Input, Flatten, Dense, Dropout, Activation, concatenate
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import SGD
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import pandas
import numpy as np
import matplotlib.pyplot as plt

nb_classes = 2                # two classes: cats and dogs
input_shape = (3, 227, 227)   # (channels, height, width); only the spatial part is passed to the generators

batch_size = 16               # number of images per training batch

train_datagen = ImageDataGenerator(
    rescale = 1./255,        # scale pixel values to [0, 1]
    shear_range = 0.2,       # random shearing
    zoom_range = 0.2,        # random zooming in and out
    horizontal_flip = True   # random horizontal flips
)
test_datagen = ImageDataGenerator(rescale = 1./255)

train_generator = train_datagen.flow_from_directory(
    'data/train',
    shuffle = True,
    target_size = input_shape[1:],
    batch_size = batch_size,
    class_mode = 'categorical'
)

validation_generator = test_datagen.flow_from_directory(
    'data/test',
    shuffle = True,
    target_size = input_shape[1:],
    batch_size = batch_size,
    class_mode = 'categorical'
)
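To eyeball what the augmentation produces, a few images from one training batch can be displayed. A minimal sketch, assuming the generators defined above:

x_batch, y_batch = next(train_generator)          # one batch of augmented images and labels
fig, axes = plt.subplots(1, 4, figsize = (12, 3))
for ax, img in zip(axes, x_batch[:4]):
    ax.imshow(img)                                # pixel values are already rescaled to [0, 1]
    ax.axis('off')
plt.show()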

2.3 Implementing a Single-GPU AlexNet Architecture

The figure below shows the single-GPU AlexNet model. The network has a total of 5 convolutional layers, 3 max-pooling layers, and 2 fully connected layers, and its input is a 3 x 224 x 224 three-channel color image.

  In this single-GPU version, the kernel sizes from left to right are again 11, 5, 3, 3 and 3; the input image is 224 x 224, and the feature maps entering the final pooling stage are again 13 x 13.

  • Input layer (Input): a 3 x 224 x 224 image tensor.
  • Convolutional layer (Conv1): 48 kernels of size 11 x 11.
  • Pooling layer (Pool1): max pooling with a 3 x 3 window, stride = 2.
  • Convolutional layer (Conv2): 128 kernels of size 5 x 5.
  • Pooling layer (Pool2): max pooling with a 3 x 3 window, stride = 2.
  • Convolutional layer (Conv3): 192 kernels of size 3 x 3.
  • Convolutional layer (Conv4): 192 kernels of size 3 x 3.
  • Convolutional layer (Conv5): 128 kernels of size 3 x 3.
  • Pooling layer (Pool5): max pooling with a 3 x 3 window, stride = 2.
  • Fully connected layer (FC1): flattens the output of the fifth-layer pooling into a one-dimensional vector and outputs 2048 units.
  • Fully connected layer (FC2): input and output are both 2048 units.
  • Softmax output layer: one unit per class, each giving the probability that the image belongs to that class. On ImageNet this would be 1000 units; in the cats-vs-dogs code below it is nb_classes = 2.

The complete implementation follows:

from keras.models import Model
from keras.layers import Input, Flatten, Dense, Dropout, Activation, concatenate
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import SGD
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import pandas
import numpy as np
import matplotlib.pyplot as plt

# Data augmentation

nb_classes = 2                # two classes: cats and dogs
input_shape = (3, 227, 227)   # (channels, height, width); only the spatial part is passed to the generators

batch_size = 16               # number of images per training batch
train_datagen = ImageDataGenerator(
    rescale = 1./255,        # scale pixel values to [0, 1]
    shear_range = 0.2,
    zoom_range = 0.2,
    horizontal_flip = True
)
test_datagen = ImageDataGenerator(rescale = 1./255)

train_generator = train_datagen.flow_from_directory(
    'data/train',
    shuffle = True,
    target_size = input_shape[1:],
    batch_size = batch_size,
    class_mode = 'categorical'
)

validation_generator = test_datagen.flow_from_directory(
    'data/test',
    shuffle = True,
    target_size = input_shape[1:],
    batch_size = batch_size,
    class_mode = 'categorical'
)



# The network input: a 227 x 227 three-channel image, redefined here in channels_last order
input_shape = (227, 227, 3)

# Input layer
inputs = Input(shape = input_shape, name = "input")

# Layer 1: one convolution followed by one max-pooling operation
conv1 = Conv2D(48, (11, 11), strides = (4, 4), activation = "relu", name = "conv1")(inputs)
pool1 = MaxPooling2D((3, 3), strides = (2, 2), name = "pool1")(conv1)


# Layer 2: convolution plus pooling applied to the output of layer 1
conv2 = Conv2D(128, (5, 5), activation = "relu", name = "conv2", padding = "same")(pool1)
pool2 = MaxPooling2D((3, 3), strides = (2, 2), name = "pool2")(conv2)


# (No merge layer is needed here; the two towers exist only in the dual-GPU version below.)


# Layer 3: a single convolution, taking the pooled output of layer 2
conv3 = Conv2D(192, (3, 3), activation = "relu", name = "conv3", padding = "same")(pool2)


# Layer 4: another convolution, as in layer 3
conv4 = Conv2D(192, (3, 3), activation = "relu", name = "conv4", padding = "same")(conv3)


# Layer 5: a convolution followed by max pooling on the layer-4 output
conv5 = Conv2D(128, (3, 3), activation = "relu", name = "conv5", padding = "same")(conv4)
pool5 = MaxPooling2D((3, 3), strides = (2, 2), name = "pool5")(conv5)


# Flatten the multi-dimensional feature maps into a one-dimensional vector
dense1 = Flatten(name = 'flatten')(pool5)

# Layers 6 and 7: two 2048-unit fully connected layers, each followed by Dropout to curb overfitting
dense2 = Dense(2048, activation = 'relu', name = 'dense2')(dense1)
dense2 = Dropout(0.5)(dense2)
dense3 = Dense(2048, activation = 'relu', name = 'dense3')(dense2)
dense3 = Dropout(0.5)(dense3)

# Output layer: nb_classes units with a softmax classifier
dense_3 = Dense(nb_classes, name = 'dense_3')(dense3)
prediction = Activation('softmax', name = 'softmax')(dense_3)

# Finally, define the model by its input and output tensors
AlexNet = Model(inputs = inputs, outputs = prediction)

# Optimize with stochastic gradient descent (SGD)
sgd = SGD(lr = 0.01, decay = 1e-6, momentum = 0.9, nesterov = True)
AlexNet.compile(loss = 'categorical_crossentropy',
                optimizer = sgd,
                metrics = ['accuracy'])
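Before training, printing the layer shapes is a quick way to confirm they match the architecture list above:

AlexNet.summary()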

# Train the model; the per-epoch metrics are recorded in history_callback
history_callback = AlexNet.fit_generator(train_generator,
                                         steps_per_epoch = 2000,   # batches drawn per epoch
                                         epochs = 1,
                                         validation_data = validation_generator,
                                         validation_steps = 800)   # validation batches per epoch


# Save the per-epoch training metrics, then the trained weights
pandas.DataFrame(history_callback.history).to_csv("./AlexNet_model.csv")
AlexNet.save_weights('./AlexNet_model.h5')
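The saved weights can later be restored into an identically constructed model, for example:

AlexNet.load_weights('./AlexNet_model.h5')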


def plot_performance(history):
    """Plot the accuracy and loss curves for the training and validation sets."""
    plt.subplot(1, 2, 1)
    plt.plot(history['acc'][1:])
    plt.plot(history['val_acc'][1:], 'r')
    plt.title('Accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.legend(['train', 'val'], loc = 'upper left')

    plt.subplot(1, 2, 2)
    plt.plot(history['loss'][1:])
    plt.plot(history['val_loss'][1:], 'r')   # plot the validation loss, not accuracy
    plt.title('Loss')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['train', 'val'], loc = 'upper left')
    plt.tight_layout()
    plt.show()
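Once training has finished, the curves can be drawn from the recorded history, for example:

plot_performance(history_callback.history)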

After running the script:

 Using TensorFlow backend.

Found 25000 images belonging to 2 classes.
Found 63 images belonging to 2 classes.

Epoch 1/1

1/2000 [..............................] - ETA: 2:07:45 - loss: 0.6850 - accuracy: 0.6250
2/2000 [..............................] - ETA: 1:41:13 - loss: 0.6973 - accuracy: 0.5000
3/2000 [..............................] - ETA: 1:32:11 - loss: 0.6941 - accuracy: 0.5208
4/2000 [..............................] - ETA: 1:28:35 - loss: 0.6918 - accuracy: 0.5312
5/2000 [..............................] - ETA: 1:25:50 - loss: 0.6913 - accuracy: 0.5125
6/2000 [..............................] - ETA: 1:24:05 - loss: 0.6902 - accuracy: 0.5208
7/2000 [..............................] - ETA: 1:23:10 - loss: 0.6861 - accuracy: 0.5268
8/2000 [..............................] - ETA: 1:22:40 - loss: 0.6942 - accuracy: 0.5234
9/2000 [..............................] - ETA: 1:21:33 - loss: 0.6964 - accuracy: 0.5208
10/2000 [..............................] - ETA: 1:21:11 - loss: 0.6937 - accuracy: 0.5250

... (remaining training output truncated)

2.4 Implementing a Dual-GPU (Two-Tower) AlexNet Architecture

The complete code follows:

from keras.models import Model
from keras.layers import Input, Flatten, Dense, Dropout, Activation, concatenate
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import SGD
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import pandas
import numpy as np
import matplotlib.pyplot as plt

nb_classes = 2                # two classes: cats and dogs
input_shape = (3, 227, 227)   # (channels, height, width); only the spatial part is passed to the generators

batch_size = 16               # number of images per training batch
train_datagen = ImageDataGenerator(
    rescale = 1./255,        # scale pixel values to [0, 1]
    shear_range = 0.2,
    zoom_range = 0.2,
    horizontal_flip = True
)
test_datagen = ImageDataGenerator(rescale = 1./255)

train_generator = train_datagen.flow_from_directory(
    'data/train',
    shuffle = True,
    target_size = input_shape[1:],
    batch_size = batch_size,
    class_mode = 'categorical'
)

validation_generator = test_datagen.flow_from_directory(
    'data/test',
    shuffle = True,
    target_size = input_shape[1:],
    batch_size = batch_size,
    class_mode = 'categorical'
)



# The network input: a 227 x 227 three-channel image, redefined here in channels_last order
input_shape = (227, 227, 3)

# Input layer
inputs = Input(shape = input_shape, name = "input")

# Layer 1: two parallel convolutions (one per GPU tower), each followed by max pooling
conv1_1 = Conv2D(48, (11, 11), strides = (4, 4), activation = "relu", name = "conv1_1")(inputs)
conv1_2 = Conv2D(48, (11, 11), strides = (4, 4), activation = "relu", name = "conv1_2")(inputs)
pool1_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool1_1')(conv1_1)
pool1_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool1_2')(conv1_2)

# Layer 2: each tower applies its own convolution and pooling to the corresponding layer-1 output
conv2_1 = Conv2D(128, (5, 5), activation = "relu", name = "conv2_1", padding = "same")(pool1_1)
conv2_2 = Conv2D(128, (5, 5), activation = "relu", name = "conv2_2", padding = "same")(pool1_2)
pool2_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool2_1')(conv2_1)
pool2_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool2_2')(conv2_2)

# Merge layer: the two towers exchange information between layers 2 and 3
merge1 = concatenate([pool2_1, pool2_2], axis = -1)  # join along the channel axis (channels_last)


# Layer 3: two convolutions, one per tower, each seeing the merged feature maps
conv3_1 = Conv2D(192, (3, 3), activation = "relu", name = "conv3_1", padding = "same")(merge1)
conv3_2 = Conv2D(192, (3, 3), activation = "relu", name = "conv3_2", padding = "same")(merge1)

# Layer 4: two convolutions, as in layer 3
conv4_1 = Conv2D(192, (3, 3), activation = "relu", name = "conv4_1", padding = "same")(conv3_1)
conv4_2 = Conv2D(192, (3, 3), activation = "relu", name = "conv4_2", padding = "same")(conv3_2)

# Layer 5: each tower applies a convolution followed by max pooling to its layer-4 output
conv5_1 = Conv2D(128, (3, 3), activation = "relu", name = "conv5_1", padding = "same")(conv4_1)
conv5_2 = Conv2D(128, (3, 3), activation = "relu", name = "conv5_2", padding = "same")(conv4_2)
pool5_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool5_1')(conv5_1)
pool5_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool5_2')(conv5_2)

# Merge layer: the towers are joined again before the fully connected layers
merge2 = concatenate([pool5_1, pool5_2], axis = -1)  # join along the channel axis

# Flatten the multi-dimensional feature maps into a one-dimensional vector
dense1 = Flatten(name = 'flatten')(merge2)

# Layers 6 and 7: two 4096-unit fully connected layers, each followed by Dropout to curb overfitting
dense2 = Dense(4096, activation = 'relu', name = 'dense2')(dense1)
dense2 = Dropout(0.5)(dense2)
dense3 = Dense(4096, activation = 'relu', name = 'dense3')(dense2)
dense3 = Dropout(0.5)(dense3)

# Output layer: nb_classes units with a softmax classifier
dense_3 = Dense(nb_classes, name = 'dense_3')(dense3)
prediction = Activation('softmax', name = 'softmax')(dense_3)

# Finally, define the model by its input and output tensors
AlexNet = Model(inputs = inputs, outputs = prediction)

# Optimize with stochastic gradient descent (SGD)
sgd = SGD(lr = 0.01, decay = 1e-6, momentum = 0.9, nesterov = True)
AlexNet.compile(loss = 'categorical_crossentropy',
                optimizer = sgd,
                metrics = ['accuracy'])

# Train the model; the per-epoch metrics are recorded in history_callback
history_callback = AlexNet.fit_generator(train_generator,
                                         steps_per_epoch = 2000,   # batches drawn per epoch
                                         epochs = 10,
                                         validation_data = validation_generator,
                                         validation_steps = 800)   # validation batches per epoch


# Save the per-epoch training metrics, then the trained weights
pandas.DataFrame(history_callback.history).to_csv("./AlexNet_model.csv")
AlexNet.save_weights('./AlexNet_model.h5')

# Optional: recompile and run a single-image prediction with saved weights
# (the file paths below are placeholders)
#
# AlexNet = get_AlexNet()  # if the model were built by a helper function
# sgd = SGD(lr = 0.01, decay = 1e-6, momentum = 0.9, nesterov = True)
# AlexNet.compile(loss = 'categorical_crossentropy', optimizer = sgd, metrics = ['accuracy'])
#
# Predict with the trained model
# AlexNet.load_weights('D:/python_pro/deep_learning_al/weights.best.hdf5')  # load a weights file
# img_path = "D:/python_pro/deep_learning_al/data/80.jpg"                   # path of the image to classify
# img = image.load_img(img_path, target_size = input_shape[0:2])            # load and resize the image
# x = image.img_to_array(img)                                               # convert to a numeric array
# x = np.expand_dims(x, axis = 0)                                           # add the batch dimension: (1, 227, 227, 3)
# x = x.reshape((-1, ) + input_shape) / 255                                 # normalize to [0, 1]
#
# pres = AlexNet.predict(x)
# print(pres)


def plot_performance(history):
    """Plot the accuracy and loss curves for the training and validation sets."""
    plt.subplot(1, 2, 1)
    plt.plot(history['acc'][1:])
    plt.plot(history['val_acc'][1:], 'r')
    plt.title('Accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.legend(['train', 'val'], loc = 'upper left')

    plt.subplot(1, 2, 2)
    plt.plot(history['loss'][1:])
    plt.plot(history['val_loss'][1:], 'r')   # plot the validation loss, not accuracy
    plt.title('Loss')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['train', 'val'], loc = 'upper left')
    plt.tight_layout()
    plt.show()
