2.2 The MNIST Handwritten Digit Dataset
Fully connected network: every node in a layer is connected to all nodes in the previous layer.
A multi-hidden-layer fully connected neural network:
The code is as follows:
1. Import the required modules
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import datasets, Input, Model, Sequential
from tensorflow.keras.layers import Flatten, Dense, Activation, BatchNormalization, Dropout
from tensorflow.keras.initializers import TruncatedNormal
from tensorflow.keras.callbacks import EarlyStopping
2. Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = datasets.mnist.load_data()
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
Output:
(60000, 28, 28)
(60000,)
(10000, 28, 28)
(10000,)
Reshape the dimensions from (60000, 28, 28) to (60000, 784):
x_train = x_train.reshape(-1, 28 * 28)
x_test = x_test.reshape(-1, 28 * 28)
Normalize the pixel values to the range [0, 1]:
x_train = x_train / 255
x_test = x_test / 255
3. Building the Model
APIs used:
Fully connected layer: tf.keras.layers.Dense
Parameters used:
- units: an integer, the number of neurons in the layer.
- activation: the activation function. Options:
  - 'sigmoid': sigmoid activation
  - 'tanh': tanh activation
  - 'relu': ReLU activation
  - 'elu' or tf.keras.activations.elu(alpha=1.0): ELU activation
  - 'selu': SELU activation
  - 'swish': swish activation (TF 2.2 and above only)
  - 'softmax': softmax function
- kernel_initializer: weight initialization, default 'glorot_uniform' (i.e. Xavier uniform initialization). Options:
  - 'truncated_normal' or tf.keras.initializers.TruncatedNormal(mean=0.0, stddev=0.05): truncated normal distribution with mean 0 and standard deviation 0.05
  - 'glorot_normal': normal distribution with mean 0 and variance 2 / (fan_in + fan_out)
  - 'glorot_uniform': uniform distribution over [-limit, limit], limit = sqrt(6 / (fan_in + fan_out))
  - 'lecun_normal': normal distribution with mean 0 and variance 1 / fan_in
  - 'lecun_uniform': uniform distribution over [-limit, limit], limit = sqrt(3 / fan_in)
  - 'he_normal': normal distribution with mean 0 and variance 2 / fan_in
  - 'he_uniform': uniform distribution over [-limit, limit], limit = sqrt(6 / fan_in)

  Here fan_in is the number of input neurons and fan_out is the number of output neurons.
- name: a string naming the layer.
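The variance and limit formulas above are easy to check numerically. A minimal numpy sketch for an example layer with fan_in = 784 and fan_out = 100 (sizes chosen for illustration):

```python
import numpy as np

fan_in, fan_out = 784, 100  # example layer sizes

# glorot_uniform: U[-limit, limit], limit = sqrt(6 / (fan_in + fan_out))
glorot_limit = np.sqrt(6 / (fan_in + fan_out))

# he_normal: N(0, 2 / fan_in) -> stddev = sqrt(2 / fan_in)
he_std = np.sqrt(2 / fan_in)

# lecun_normal: N(0, 1 / fan_in) -> stddev = sqrt(1 / fan_in)
lecun_std = np.sqrt(1 / fan_in)

print(glorot_limit, he_std, lecun_std)
```

Note how all three scales shrink as fan_in grows, which is exactly what keeps deep stacks of layers from saturating.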
Input layer: tf.keras.Input
Parameters used:
- shape: the shape of the input.
- name: a string naming the layer.
BN layer: tf.keras.layers.BatchNormalization
Parameters used:
- axis: the axis to normalize over, default -1, i.e. the last axis of the input.
- name: a string naming the layer.
Model configuration: tf.keras.Sequential.compile
Parameters used:
- loss: the loss function; for multi-class tasks this is usually 'sparse_categorical_crossentropy', where sparse means the labels are plain integer class indices rather than one-hot vectors.
- optimizer: the optimizer, here 'sgd'.
- metrics: evaluation metrics, here 'accuracy'.
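To see the label-format difference concretely: sparse_categorical_crossentropy consumes integer labels directly, whereas plain categorical_crossentropy would require one-hot vectors. A minimal numpy sketch of the two formats (no TensorFlow needed):

```python
import numpy as np

# integer labels, as used with sparse_categorical_crossentropy
labels = np.array([3, 0, 7])

# equivalent one-hot encoding, as required by plain categorical_crossentropy
one_hot = np.eye(10)[labels]

print(one_hot.shape)           # (3, 10)
print(one_hot.argmax(axis=1))  # recovers [3, 0, 7]
```

Since MNIST labels come as integers, using the sparse loss avoids an explicit encoding step.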
Build a network with 50 hidden layers of 100 neurons each:
inputs = Input(shape=(28 * 28,), name='input')
x = Dense(units=100, activation='tanh', kernel_initializer=TruncatedNormal(mean=0.0, stddev=1), name='dense_0')(inputs)
for i in range(49):
    x = Dense(units=100, activation='tanh', kernel_initializer=TruncatedNormal(mean=0.0, stddev=1), name='dense_' + str(i + 1))(x)
outputs = Dense(units=10, activation='softmax', name='output')(x)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
View the model:
model.summary()
4. Training the Model
model.fit(x = x_train, y = y_train, batch_size = 128, epochs = 10, validation_split = 0.3, shuffle = True)
At this point, the model's accuracy is below 15%.
Plot a histogram of each layer's outputs:
layer_names = ['input', 'dense_0', 'dense_1', 'dense_2', 'dense_3', 'dense_4', 'dense_5']
fig, ax = plt.subplots(1, len(layer_names), figsize=(10, 5))
for i, name in enumerate(layer_names):
    layer_model = Model(inputs=model.input, outputs=model.get_layer(name).output)
    pred = layer_model.predict(x_train)
    ax[i].hist(pred.reshape(-1), bins=100)
plt.show()
Observation: most of the outputs cluster around -1 and 1; very few fall in the unsaturated middle region.
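This clustering is why training stalls: tanh's derivative is 1 - tanh(x)^2, which is nearly zero exactly where the outputs pile up, so almost no gradient flows back through 50 layers. A quick numpy check:

```python
import numpy as np

def tanh_grad(x):
    # derivative of tanh: 1 - tanh(x)^2
    return 1.0 - np.tanh(x) ** 2

print(tanh_grad(0.0))  # 1.0 at the unsaturated center
print(tanh_grad(3.0))  # ~0.01 deep in the saturated region
```

Multiplied across 50 layers, gradients in the saturated regime vanish almost entirely.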
5. Evaluating on the Test Set
model.evaluate(x_test, y_test)
As shown, the final test-set accuracy is below 12%.
6. Using Weight Initialization
(1) Using lecun_normal
inputs = Input(shape=(28 * 28,), name='input')
x = Dense(units=100, activation='tanh', kernel_initializer='lecun_normal', name='dense_0')(inputs)
for i in range(49):
    x = Dense(units=100, activation='tanh', kernel_initializer='lecun_normal', name='dense_' + str(i + 1))(x)
outputs = Dense(units=10, activation='softmax', name='output')(x)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
Training set:
Histogram:
Test set:
(2) Using he_normal
inputs = Input(shape=(28 * 28,), name='input')
x = Dense(units=100, activation='relu', kernel_initializer='he_normal', name='dense_0')(inputs)
for i in range(49):
    x = Dense(units=100, activation='relu', kernel_initializer='he_normal', name='dense_' + str(i + 1))(x)
outputs = Dense(units=10, activation='softmax', name='output')(x)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
Training set:
Histogram:
Test set:
(3) Using Batch Normalization
Similarly, instead of modifying kernel_initializer, we can add Batch Normalization and observe how it repairs the saturated activation regions; here kernel_initializer is set back to TruncatedNormal.
inputs = Input(shape=(28 * 28,), name='input')
x = Dense(units=100, kernel_initializer=TruncatedNormal(mean=0.0, stddev=1), name='dense_0')(inputs)
x = BatchNormalization(axis=-1)(x)
x = Activation(activation='tanh', name='activation_0')(x)
for i in range(49):
    x = Dense(units=100, kernel_initializer=TruncatedNormal(mean=0.0, stddev=1), name='dense_' + str(i + 1))(x)
    x = BatchNormalization(axis=-1)(x)
    x = Activation(activation='tanh', name='activation_' + str(i + 1))(x)
outputs = Dense(units=10, activation='softmax', name='output')(x)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
Compared with the initial 12%, Batch Normalization does a substantial job of repairing the saturated regions. With epochs set to 100, the accuracy eventually stabilizes around 89%.
7. Visualization
(1) Viewing dataset images
Define a plotting function plot_images.
Parameters:
- images: a sequence of image arrays.
- labels: a sequence of the corresponding labels (elements must be integers 0, 1, 2, ..., 9).
def plot_images(images, labels):
    class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
    fig, axes = plt.subplots(3, 10, figsize=(20, 8))
    axes = axes.flatten()
    for img, label, ax in zip(images, labels, axes):
        ax.imshow(img.reshape(28, 28))  # x_train was flattened to 784 earlier
        ax.set_title(class_names[label])
        ax.axis('off')
    plt.tight_layout()
    plt.show()
Randomly sample 30 training images for viewing:
np.random.seed(99)
index_list = np.random.randint(0, 60000, 30)  # high is exclusive, so 60000 covers every index
plot_images(x_train[index_list], y_train[index_list])
Randomly sample 30 test images for viewing in the same way.
(2) Plotting trends in the history data
history = model.fit(x=x_train, y=y_train, batch_size=32,
epochs=200, validation_split=0.4,
shuffle=True)
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.xlabel('epoch')
plt.show()
As epochs increase:
- loss (blue) keeps falling;
- val_loss (green) first falls, then rises;
- accuracy (yellow) keeps rising;
- val_accuracy (red) rises at first, then plateaus.
The diverging trends of loss (blue) and val_loss (green) indicate that the model is overfitting.
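This diagnosis can also be made programmatically from the history dict: compare the final val_loss with its minimum. A minimal sketch using a synthetic dict shaped like history.history (the threshold 1.1 is an illustrative choice, not a Keras convention):

```python
# synthetic history.history-style dict: loss keeps falling, val_loss turns back up
hist = {
    'loss':     [1.0, 0.6, 0.4, 0.30, 0.20, 0.15],
    'val_loss': [1.1, 0.7, 0.5, 0.45, 0.55, 0.70],
}

def looks_overfit(val_loss):
    # overfitting signature: final val_loss is clearly above its minimum
    return val_loss[-1] > min(val_loss) * 1.1

print(looks_overfit(hist['val_loss']))  # True
```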
8. Dealing with Overfitting
(1) Increase the amount of training data
- Collect more data
- Image augmentation: apply a series of random transformations to existing data to generate similar but distinct training samples, enlarging the training set. Augmentation is not covered here; if interested, see the tf.keras.preprocessing.image.ImageDataGenerator API.
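As a flavor of what augmentation does, here is a minimal numpy sketch that shifts an image by a random offset and zero-pads the border (the real ImageDataGenerator offers rotations, shifts, zooms, flips, and more; this hand-rolled random_shift helper is only for illustration):

```python
import numpy as np

def random_shift(img, max_shift=2, rng=None):
    """Shift a 2-D image by a random (dy, dx) offset, padding with zeros."""
    rng = rng or np.random.default_rng()
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = np.zeros_like(img)
    # crop the source region that stays in frame after the shift
    src = img[max(0, -dy):img.shape[0] - max(0, dy),
              max(0, -dx):img.shape[1] - max(0, dx)]
    out[max(0, dy):max(0, dy) + src.shape[0],
        max(0, dx):max(0, dx) + src.shape[1]] = src
    return out

img = np.arange(16.0).reshape(4, 4)
aug = random_shift(img, max_shift=1, rng=np.random.default_rng(0))
print(aug.shape)  # (4, 4): same shape, slightly different content
```

Applying small random shifts like this to each MNIST digit produces new-looking training samples with the same label.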
np.random.seed(43)
index_list = np.random.randint(0, 60000, 10000)  # changed from randomly sampling 1,000 images to 10,000
x_train = x_train[index_list]
y_train = y_train[index_list]
The rise in val_loss is noticeably smaller.
accuracy: 0.9388
(2) Reduce model complexity
- Fewer hidden layers
- Fewer neurons per layer
model = Sequential()
# flatten layer
model.add(Flatten(input_shape=(28, 28), name='flatten'))
model.add(Dense(units=100, activation='tanh', kernel_initializer='lecun_normal'))
# hidden dense layers
# for i in range(20):
#     model.add(Dense(units=500, activation='tanh', kernel_initializer='lecun_normal'))
# hidden dense layers with regularization
# for i in range(20):
#     model.add(Dense(units=500, activation='tanh', kernel_initializer='lecun_normal',
#                     kernel_regularizer=tf.keras.regularizers.l2(1e-5)))
# dropout layer
# model.add(Dropout(rate=0.5))
# output layer
model.add(Dense(units=10, activation='softmax', name='logit'))
# set the loss, optimizer, and metrics
model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.SGD(
    learning_rate=0.001), metrics=['accuracy'])
accuracy: 0.9053
(3) Add regularization
tf.keras.regularizers.l2
Parameters used:
- l: the penalty coefficient, default 0.01.
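The L2 penalty added to the loss is simply l * sum(w**2) over the layer's kernel weights, which nudges the weights toward zero. A numpy sketch with a toy weight matrix:

```python
import numpy as np

l = 1e-5                                  # the regularization factor used below
w = np.array([[0.5, -1.0], [2.0, 0.0]])   # toy weight matrix

# what kernel_regularizer=l2(l) adds to the training loss
penalty = l * np.sum(w ** 2)
print(penalty)
```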
model = Sequential()
# flatten layer
model.add(Flatten(input_shape=(28, 28), name='flatten'))
# hidden dense layers with L2 regularization
for i in range(20):
    model.add(Dense(units=500, activation='tanh', kernel_initializer='lecun_normal',
                    kernel_regularizer=tf.keras.regularizers.l2(1e-5)))
# output layer
model.add(Dense(units=10, activation='softmax', name='logit'))
# set the loss, optimizer, and metrics
model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.SGD(
    learning_rate=0.001), metrics=['accuracy'])
(4) Early stopping
tf.keras.callbacks.EarlyStopping
Parameters used:
- monitor: the quantity to monitor, usually 'val_loss'.
- min_delta: the minimum change that counts as an improvement; only changes larger than min_delta are treated as improvement. Default 0.
- patience: the number of epochs without improvement after which training stops. Default 0.
- restore_best_weights: whether to restore the model weights from the epoch with the best monitored value; if False, the weights from the final training step are used. Default False.
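The EarlyStopping logic can be sketched in a few lines: track the best monitored value, count epochs without at least min_delta of improvement, and stop when the count reaches patience. A minimal Python sketch for a lower-is-better monitor like val_loss (the helper name early_stop_epoch is our own, not a Keras API):

```python
def early_stop_epoch(val_losses, min_delta=1e-4, patience=10):
    """Return the epoch index at which training would stop, or None."""
    best = float('inf')
    wait = 0
    for epoch, v in enumerate(val_losses):
        if v < best - min_delta:  # improvement of at least min_delta
            best = v
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# val_loss improves for three epochs, then stalls; patience=2 stops at epoch 4
print(early_stop_epoch([1.0, 0.9, 0.89, 0.895, 0.9, 0.91], patience=2))  # 4
```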
# set up EarlyStopping: stop training if val_loss shows no improvement for 10 epochs
earlystop = EarlyStopping(monitor='val_loss', min_delta=1e-4, patience=10, restore_best_weights=True)
In the end, training terminates at epoch 96.
Plot:
(5) Dropout
tf.keras.layers.Dropout
Parameters used:
- rate: the probability that a unit is dropped, a float between 0 and 1.
- seed: the random seed, a positive integer.
- name: a string naming the layer.
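At training time, Dropout zeroes each unit with probability rate and, in Keras's inverted-dropout convention, scales the survivors by 1 / (1 - rate) so the expected activation is unchanged; at inference it is a no-op. A numpy sketch of that behavior:

```python
import numpy as np

def dropout(x, rate=0.5, rng=None, training=True):
    if not training:
        return x  # dropout is inactive at inference time
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= rate  # keep each unit with probability 1 - rate
    return x * mask / (1.0 - rate)      # inverted dropout: rescale survivors

x = np.ones((1000, 100))
y = dropout(x, rate=0.5, rng=np.random.default_rng(0))
print(y.mean())  # close to 1.0: the expected activation is preserved
```

Because the rescaling happens during training, no adjustment is needed when the layer is disabled at test time.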
model = Sequential()
# flatten layer
model.add(Flatten(input_shape=(28, 28), name='flatten'))
# model.add(Dense(units=100, activation='tanh', kernel_initializer='lecun_normal'))
# hidden dense layers
# for i in range(20):
#     model.add(Dense(units=500, activation='tanh', kernel_initializer='lecun_normal'))
# hidden dense layers
for i in range(18):
    model.add(Dense(units=500, activation='tanh', kernel_initializer='lecun_normal'))
model.add(Dense(units=500, activation='tanh', kernel_initializer='lecun_normal'))
# dropout layer
model.add(Dropout(rate=0.5))
model.add(Dense(units=500, activation='tanh', kernel_initializer='lecun_normal'))
# dropout layer
model.add(Dropout(rate=0.5))
# output layer
model.add(Dense(units=10, activation='softmax', name='logit'))
# set the loss, optimizer, and metrics
model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.SGD(
    learning_rate=0.001), metrics=['accuracy'])