This article is part of the 🔗 365-Day Deep Learning Training Camp internal articles
Original author: K同学啊
I. Preparation
1. Import the data
from tensorflow import keras
from tensorflow.keras import layers,models
import os, PIL, pathlib
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
data_dir = "./face/"
data_dir = pathlib.Path(data_dir)
2. Inspect the data
image_count = len(list(data_dir.glob('*/*.jpg')))
print("Total number of images:", image_count)
Total number of images: 1800
roses = list(data_dir.glob('Jennifer Lawrence/*.jpg'))
PIL.Image.open(str(roses[0]))
II. Data Preprocessing
1. Load the data
Use the image_dataset_from_directory method to load the data from disk into a tf.data.Dataset.
A note on the relationship between the validation set and training:
- The validation set does not take part in the gradient-descent part of training; strictly speaking, it plays no role in updating the model's parameters.
- Broadly speaking, though, the validation set does take part in a "manual tuning" process: based on the model's performance on the validation data after each epoch, we decide whether to early-stop training, and we adjust hyperparameters such as the learning rate and batch_size according to how the model's performance evolves.
- In that sense the validation set participates in training too, but without making the model overfit it.
batch_size = 32
img_height = 224
img_width = 224
label_mode:
- int: labels are encoded as integers (the loss function should be sparse_categorical_crossentropy).
- categorical: labels are encoded as one-hot category vectors (the loss function should be categorical_crossentropy).
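As a small illustration (plain NumPy, with a made-up sample and the 17 classes of this dataset), the two label encodings for the same sample look like this:

```python
import numpy as np

num_classes = 17   # the 17 celebrities in this dataset
class_index = 4    # hypothetical class index of one sample

# label_mode="int": the label is simply the integer class index
int_label = class_index

# label_mode="categorical": the label is a one-hot vector of length num_classes
onehot_label = np.zeros(num_classes, dtype=np.float32)
onehot_label[class_index] = 1.0

# np.argmax recovers the integer index from the one-hot vector
assert np.argmax(onehot_label) == int_label
```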
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset="training",
    label_mode="categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 1800 files belonging to 17 classes. Using 1620 files for training.
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset="validation",
    label_mode="categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 1800 files belonging to 17 classes. Using 180 files for validation.
class_names = train_ds.class_names
print(class_names)
['Angelina Jolie', 'Brad Pitt', 'Denzel Washington', 'Hugh Jackman', 'Jennifer Lawrence', 'Johnny Depp', 'Kate Winslet', 'Leonardo DiCaprio', 'Megan Fox', 'Natalie Portman', 'Nicole Kidman', 'Robert Downey Jr', 'Sandra Bullock', 'Scarlett Johansson', 'Tom Cruise', 'Tom Hanks', 'Will Smith']
2. Visualize the data
plt.figure(figsize=(20, 10))
for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[np.argmax(labels[i])])
        plt.axis("off")
3. Re-check the data
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
(32, 224, 224, 3)
(32, 17)
image_batch is a tensor of shape (32, 224, 224, 3): a batch of 32 images of shape 224x224x3 (the last dimension is the RGB color channels).
labels_batch is a tensor of shape (32, 17): the one-hot (categorical) labels for the 32 images.
4. Configure the dataset
- shuffle(): shuffles the data; a detailed introduction to this function can be found at https://zhuanlan.zhihu.com/p/42417456
- prefetch(): prefetches data to speed up execution
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
III. Building the CNN
The input to a convolutional neural network (CNN) is a tensor of shape (image_height, image_width, color_channels), holding the image's height, width, and color information; the batch size does not need to be included. color_channels = (R, G, B) are the three RGB color channels. In this example, the CNN's input shape is (224, 224, 3), i.e. a color image. We assign this shape to the input_shape argument when declaring the first layer.
model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    layers.Conv2D(16, (3, 3), activation='relu'),   # conv layer 1, 3x3 kernels
    layers.AveragePooling2D((2, 2)),                # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),   # conv layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),                # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.5),
    layers.Conv2D(64, (3, 3), activation='relu'),   # conv layer 3, 3x3 kernels
    layers.AveragePooling2D((2, 2)),                # pooling layer 3, 2x2 downsampling
    layers.Dropout(0.5),
    layers.Conv2D(128, (3, 3), activation='relu'),  # conv layer 4, 3x3 kernels
    layers.Dropout(0.5),
    layers.Flatten(),                               # Flatten layer, bridges the conv and dense layers
    layers.Dense(128, activation='relu'),           # fully connected layer for further feature extraction
    layers.Dense(len(class_names))                  # output layer (one logit per class)
])

model.summary()  # print the network architecture
Model: "sequential"
_________________________________________________________________
 Layer (type)                            Output Shape           Param #
=================================================================
 rescaling (Rescaling)                   (None, 224, 224, 3)    0
 conv2d (Conv2D)                         (None, 222, 222, 16)   448
 average_pooling2d (AveragePooling2D)    (None, 111, 111, 16)   0
 conv2d_1 (Conv2D)                       (None, 109, 109, 32)   4640
 average_pooling2d_1 (AveragePooling2D)  (None, 54, 54, 32)     0
 dropout (Dropout)                       (None, 54, 54, 32)     0
 conv2d_2 (Conv2D)                       (None, 52, 52, 64)     18496
 average_pooling2d_2 (AveragePooling2D)  (None, 26, 26, 64)     0
 dropout_1 (Dropout)                     (None, 26, 26, 64)     0
 conv2d_3 (Conv2D)                       (None, 24, 24, 128)    73856
 dropout_2 (Dropout)                     (None, 24, 24, 128)    0
 flatten (Flatten)                       (None, 73728)          0
 dense (Dense)                           (None, 128)            9437312
 dense_1 (Dense)                         (None, 17)             2193
=================================================================
Total params: 9,536,945
Trainable params: 9,536,945
Non-trainable params: 0
_________________________________________________________________
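The parameter counts in the summary can be verified by hand: a Conv2D layer has (kernel_h × kernel_w × in_channels + 1) × filters parameters (the +1 is the per-filter bias), and a Dense layer has (inputs + 1) × outputs. A quick sanity check in plain Python:

```python
def conv2d_params(kh, kw, c_in, filters):
    # each filter has kh*kw*c_in weights plus one bias
    return (kh * kw * c_in + 1) * filters

def dense_params(n_in, n_out):
    # a weight per input per unit, plus one bias per unit
    return (n_in + 1) * n_out

assert conv2d_params(3, 3, 3, 16) == 448            # conv2d
assert conv2d_params(3, 3, 16, 32) == 4640          # conv2d_1
assert conv2d_params(3, 3, 32, 64) == 18496         # conv2d_2
assert conv2d_params(3, 3, 64, 128) == 73856        # conv2d_3
assert dense_params(24 * 24 * 128, 128) == 9437312  # dense (Flatten gives 73728 inputs)
assert dense_params(128, 17) == 2193                # dense_1
assert 448 + 4640 + 18496 + 73856 + 9437312 + 2193 == 9536945  # total params
```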
IV. Training the Model
Before the model is ready for training, a few more settings are needed. They are added in the model's compile step:
- Loss function (loss): measures how accurate the model is during training; training aims to minimize it.
- Optimizer (optimizer): determines how the model is updated based on the data it sees and its loss function.
- Metrics (metrics): used to monitor the training and test steps. The example below uses accuracy, the fraction of images that are classified correctly.
1. Set up a dynamic learning rate
📮 The ExponentialDecay function:
tf.keras.optimizers.schedules.ExponentialDecay is a learning-rate decay schedule in TensorFlow, used to lower the learning rate dynamically while training a neural network. Learning-rate decay is a common technique that helps the optimizer converge more effectively, improving model performance.
🔎 Main parameters:
- initial_learning_rate: the initial learning rate.
- decay_steps: the number of steps between decays. After every decay_steps steps, the learning rate decays exponentially; for example, with decay_steps set to 10 it decays once every 10 steps.
- decay_rate: the decay factor, which determines how the learning rate shrinks; it usually lies between 0 and 1.
- staircase: a boolean controlling the shape of the decay. If True, the learning rate drops abruptly every decay_steps steps, forming a staircase; if False, it decays continuously.
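The schedule can be sketched in plain Python. This mirrors the formula lr = initial_learning_rate × decay_rate^(step / decay_steps), with the exponent floored when staircase=True; it is an illustration of the formula, not TensorFlow's implementation:

```python
import math

def exponential_decay(initial_lr, step, decay_steps, decay_rate, staircase=True):
    exponent = step / decay_steps
    if staircase:
        exponent = math.floor(exponent)  # step-wise drops instead of a smooth curve
    return initial_lr * decay_rate ** exponent

# With the settings used below (initial lr 1e-4, decay every 60 steps, rate 0.96):
print(exponential_decay(1e-4, 0, 60, 0.96))    # 1e-04
print(exponential_decay(1e-4, 59, 60, 0.96))   # still 1e-04 (staircase has not dropped yet)
print(exponential_decay(1e-4, 60, 60, 0.96))   # 9.6e-05
```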
# Set the initial learning rate
initial_learning_rate = 1e-4

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=60,      # Note: this counts steps, not epochs!
    decay_rate=0.96,     # each decay multiplies the lr by decay_rate
    staircase=True)

# Pass the exponentially decaying learning rate to the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
model.compile(optimizer=optimizer,
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Loss functions explained:
1. binary_crossentropy (log loss)
The loss function paired with sigmoid, for binary classification problems.
2. categorical_crossentropy (multi-class log loss)
The loss function paired with softmax; use categorical_crossentropy when the labels are one-hot encoded.
Calling option 1:
model.compile(optimizer="adam",
              loss='categorical_crossentropy',
              metrics=['accuracy'])
Calling option 2:
model.compile(optimizer="adam",
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])
3. sparse_categorical_crossentropy (sparse multi-class log loss)
The loss function paired with softmax; use sparse_categorical_crossentropy when the labels are integer encoded.
📌 Calling option 1:
model.compile(optimizer="adam",
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
📌 Calling option 2:
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])
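To see why the two multi-class losses are interchangeable given matching label encodings, here is a hand-computed single-sample example (made-up probabilities): categorical cross-entropy with a one-hot target reduces to minus the log of the probability at the true index, which is exactly what the sparse variant computes from the integer label.

```python
import math

probs = [0.1, 0.7, 0.2]     # hypothetical softmax output for one sample
onehot = [0.0, 1.0, 0.0]    # categorical (one-hot) label
int_label = 1               # the equivalent sparse (integer) label

# categorical_crossentropy: -sum(target_i * log(prob_i))
categorical_ce = -sum(t * math.log(p) for t, p in zip(onehot, probs))

# sparse_categorical_crossentropy: -log(prob at the true index)
sparse_ce = -math.log(probs[int_label])

assert abs(categorical_ce - sparse_ce) < 1e-12
```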
2. Early stopping and saving the best model weights
For a detailed introduction to ModelCheckpoint, see the article 🔗ModelCheckpoint 讲解【TensorFlow2入门手册】.
EarlyStopping() parameters:
- monitor: the quantity to be monitored.
- min_delta: the minimum change in the monitored quantity that counts as an improvement; an absolute change smaller than min_delta counts as no improvement.
- patience: the number of epochs with no improvement after which training is stopped.
- verbose: verbosity mode.
- mode: one of {auto, min, max}. In min mode, training stops when the monitored quantity stops decreasing; in max mode, it stops when the quantity stops increasing; in auto mode, the direction is inferred automatically from the name of the monitored quantity.
- baseline: baseline value for the monitored quantity; training stops if the model shows no improvement over the baseline.
- restore_best_weights: whether to restore the model weights from the epoch with the best value of the monitored quantity. If False, the weights obtained at the last step of training are used.
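The patience/min_delta bookkeeping can be sketched in plain Python for a monitored quantity that should increase, like val_accuracy. This is a simplified sketch of the idea, ignoring details of Keras's actual implementation such as baseline and restore_best_weights:

```python
def early_stop_epoch(val_history, min_delta=0.001, patience=20):
    """Return the 1-based epoch at which training would stop, or None."""
    best = float("-inf")
    wait = 0
    for epoch, value in enumerate(val_history, start=1):
        if value - best > min_delta:  # improvement larger than min_delta
            best = value
            wait = 0
        else:
            wait += 1                 # no (sufficient) improvement this epoch
            if wait >= patience:
                return epoch
    return None

# val_accuracy improves twice, then plateaus; with patience=2 training
# stops two epochs after the last improvement
print(early_stop_epoch([0.10, 0.20, 0.20, 0.20], patience=2))  # 4
```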
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 100

# Save the best model weights
checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

# Set up early stopping
earlystopper = EarlyStopping(monitor='val_accuracy',
                             min_delta=0.001,
                             patience=20,
                             verbose=1)
3. Train the model
history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer, earlystopper])
Epoch 1/100 51/51 [==============================] - ETA: 0s - loss: 2.8034 - accuracy: 0.1025 Epoch 1: val_accuracy improved from -inf to 0.13889, saving model to best_model.h5 51/51 [==============================] - 29s 544ms/step - loss: 2.8034 - accuracy: 0.1025 - val_loss: 2.7593 - val_accuracy: 0.1389 Epoch 2/100 51/51 [==============================] - ETA: 0s - loss: 2.7337 - accuracy: 0.1235 Epoch 2: val_accuracy did not improve from 0.13889 51/51 [==============================] - 25s 489ms/step - loss: 2.7337 - accuracy: 0.1235 - val_loss: 2.7019 - val_accuracy: 0.1222 Epoch 3/100 51/51 [==============================] - ETA: 0s - loss: 2.6281 - accuracy: 0.1488 Epoch 3: val_accuracy improved from 0.13889 to 0.17222, saving model to best_model.h5 51/51 [==============================] - 26s 508ms/step - loss: 2.6281 - accuracy: 0.1488 - val_loss: 2.5811 - val_accuracy: 0.1722 Epoch 4/100 51/51 [==============================] - ETA: 0s - loss: 2.4953 - accuracy: 0.1802 Epoch 4: val_accuracy did not improve from 0.17222 51/51 [==============================] - 25s 486ms/step - loss: 2.4953 - accuracy: 0.1802 - val_loss: 2.5510 - val_accuracy: 0.1667 Epoch 5/100 51/51 [==============================] - ETA: 0s - loss: 2.3667 - accuracy: 0.2247 Epoch 5: val_accuracy improved from 0.17222 to 0.18333, saving model to best_model.h5 51/51 [==============================] - 25s 498ms/step - loss: 2.3667 - accuracy: 0.2247 - val_loss: 2.5071 - val_accuracy: 0.1833 Epoch 6/100 51/51 [==============================] - ETA: 0s - loss: 2.2595 - accuracy: 0.2457 Epoch 6: val_accuracy improved from 0.18333 to 0.20000, saving model to best_model.h5 51/51 [==============================] - 27s 529ms/step - loss: 2.2595 - accuracy: 0.2457 - val_loss: 2.5055 - val_accuracy: 0.2000 Epoch 7/100 51/51 [==============================] - ETA: 0s - loss: 2.1593 - accuracy: 0.3074 Epoch 7: val_accuracy improved from 0.20000 to 0.26667, saving model to best_model.h5 51/51 
[==============================] - 27s 526ms/step - loss: 2.1593 - accuracy: 0.3074 - val_loss: 2.3860 - val_accuracy: 0.2667 Epoch 8/100 51/51 [==============================] - ETA: 0s - loss: 2.0820 - accuracy: 0.3204 Epoch 8: val_accuracy did not improve from 0.26667 51/51 [==============================] - 25s 485ms/step - loss: 2.0820 - accuracy: 0.3204 - val_loss: 2.4087 - val_accuracy: 0.2278 Epoch 9/100 51/51 [==============================] - ETA: 0s - loss: 1.9792 - accuracy: 0.3630 Epoch 9: val_accuracy did not improve from 0.26667 51/51 [==============================] - 25s 494ms/step - loss: 1.9792 - accuracy: 0.3630 - val_loss: 2.3287 - val_accuracy: 0.2444 Epoch 10/100 51/51 [==============================] - ETA: 0s - loss: 1.8801 - accuracy: 0.3889 Epoch 10: val_accuracy did not improve from 0.26667 51/51 [==============================] - 25s 493ms/step - loss: 1.8801 - accuracy: 0.3889 - val_loss: 2.4100 - val_accuracy: 0.2667 Epoch 11/100 51/51 [==============================] - ETA: 0s - loss: 1.8410 - accuracy: 0.4049 Epoch 11: val_accuracy improved from 0.26667 to 0.28333, saving model to best_model.h5 51/51 [==============================] - 25s 497ms/step - loss: 1.8410 - accuracy: 0.4049 - val_loss: 2.3887 - val_accuracy: 0.2833 Epoch 12/100 51/51 [==============================] - ETA: 0s - loss: 1.7327 - accuracy: 0.4488 Epoch 12: val_accuracy did not improve from 0.28333 51/51 [==============================] - 25s 490ms/step - loss: 1.7327 - accuracy: 0.4488 - val_loss: 2.3540 - val_accuracy: 0.2667 Epoch 13/100 51/51 [==============================] - ETA: 0s - loss: 1.6976 - accuracy: 0.4531 Epoch 13: val_accuracy improved from 0.28333 to 0.29444, saving model to best_model.h5 51/51 [==============================] - 26s 501ms/step - loss: 1.6976 - accuracy: 0.4531 - val_loss: 2.4273 - val_accuracy: 0.2944 Epoch 14/100 51/51 [==============================] - ETA: 0s - loss: 1.6117 - accuracy: 0.4796 Epoch 14: val_accuracy did not 
improve from 0.29444 51/51 [==============================] - 25s 482ms/step - loss: 1.6117 - accuracy: 0.4796 - val_loss: 2.3608 - val_accuracy: 0.2889 Epoch 15/100 51/51 [==============================] - ETA: 0s - loss: 1.5647 - accuracy: 0.4920 Epoch 15: val_accuracy did not improve from 0.29444 51/51 [==============================] - 26s 508ms/step - loss: 1.5647 - accuracy: 0.4920 - val_loss: 2.4020 - val_accuracy: 0.2500 Epoch 16/100 51/51 [==============================] - ETA: 0s - loss: 1.4893 - accuracy: 0.5080 Epoch 16: val_accuracy did not improve from 0.29444 51/51 [==============================] - 27s 540ms/step - loss: 1.4893 - accuracy: 0.5080 - val_loss: 2.3948 - val_accuracy: 0.2889 Epoch 17/100 51/51 [==============================] - ETA: 0s - loss: 1.3946 - accuracy: 0.5426 Epoch 17: val_accuracy did not improve from 0.29444 51/51 [==============================] - 26s 508ms/step - loss: 1.3946 - accuracy: 0.5426 - val_loss: 2.5952 - val_accuracy: 0.2333 Epoch 18/100 51/51 [==============================] - ETA: 0s - loss: 1.3637 - accuracy: 0.5667 Epoch 18: val_accuracy did not improve from 0.29444 51/51 [==============================] - 25s 497ms/step - loss: 1.3637 - accuracy: 0.5667 - val_loss: 2.4804 - val_accuracy: 0.2889 Epoch 19/100 51/51 [==============================] - ETA: 0s - loss: 1.2757 - accuracy: 0.5907 Epoch 19: val_accuracy did not improve from 0.29444 51/51 [==============================] - 26s 504ms/step - loss: 1.2757 - accuracy: 0.5907 - val_loss: 2.6045 - val_accuracy: 0.2611 Epoch 20/100 51/51 [==============================] - ETA: 0s - loss: 1.2119 - accuracy: 0.6049 Epoch 20: val_accuracy did not improve from 0.29444 51/51 [==============================] - 27s 524ms/step - loss: 1.2119 - accuracy: 0.6049 - val_loss: 2.5891 - val_accuracy: 0.2833 Epoch 21/100 51/51 [==============================] - ETA: 0s - loss: 1.1320 - accuracy: 0.6321 Epoch 21: val_accuracy improved from 0.29444 to 0.31111, saving model 
to best_model.h5 51/51 [==============================] - 27s 532ms/step - loss: 1.1320 - accuracy: 0.6321 - val_loss: 2.6082 - val_accuracy: 0.3111 Epoch 22/100 51/51 [==============================] - ETA: 0s - loss: 1.0988 - accuracy: 0.6500 Epoch 22: val_accuracy did not improve from 0.31111 51/51 [==============================] - 26s 517ms/step - loss: 1.0988 - accuracy: 0.6500 - val_loss: 2.6518 - val_accuracy: 0.3111 Epoch 23/100 51/51 [==============================] - ETA: 0s - loss: 1.0227 - accuracy: 0.6512 Epoch 23: val_accuracy improved from 0.31111 to 0.31667, saving model to best_model.h5 51/51 [==============================] - 26s 517ms/step - loss: 1.0227 - accuracy: 0.6512 - val_loss: 2.5977 - val_accuracy: 0.3167 Epoch 24/100 51/51 [==============================] - ETA: 0s - loss: 0.9693 - accuracy: 0.6932 Epoch 24: val_accuracy improved from 0.31667 to 0.32778, saving model to best_model.h5 51/51 [==============================] - 29s 560ms/step - loss: 0.9693 - accuracy: 0.6932 - val_loss: 2.7027 - val_accuracy: 0.3278 Epoch 25/100 51/51 [==============================] - ETA: 0s - loss: 0.9445 - accuracy: 0.6981 Epoch 25: val_accuracy improved from 0.32778 to 0.33889, saving model to best_model.h5 51/51 [==============================] - 33s 655ms/step - loss: 0.9445 - accuracy: 0.6981 - val_loss: 2.7354 - val_accuracy: 0.3389 Epoch 26/100 51/51 [==============================] - ETA: 0s - loss: 0.9030 - accuracy: 0.7068 Epoch 26: val_accuracy did not improve from 0.33889 51/51 [==============================] - 27s 535ms/step - loss: 0.9030 - accuracy: 0.7068 - val_loss: 2.6896 - val_accuracy: 0.3222 Epoch 27/100 51/51 [==============================] - ETA: 0s - loss: 0.8134 - accuracy: 0.7506 Epoch 27: val_accuracy did not improve from 0.33889 51/51 [==============================] - 25s 496ms/step - loss: 0.8134 - accuracy: 0.7506 - val_loss: 2.7703 - val_accuracy: 0.2889 Epoch 28/100 51/51 [==============================] - ETA: 0s - 
loss: 0.7747 - accuracy: 0.7488 Epoch 28: val_accuracy did not improve from 0.33889 51/51 [==============================] - 25s 488ms/step - loss: 0.7747 - accuracy: 0.7488 - val_loss: 2.8948 - val_accuracy: 0.3278 Epoch 29/100 51/51 [==============================] - ETA: 0s - loss: 0.7354 - accuracy: 0.7747 Epoch 29: val_accuracy improved from 0.33889 to 0.34444, saving model to best_model.h5 51/51 [==============================] - 26s 515ms/step - loss: 0.7354 - accuracy: 0.7747 - val_loss: 2.8595 - val_accuracy: 0.3444 Epoch 30/100 51/51 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.7796 Epoch 30: val_accuracy did not improve from 0.34444 51/51 [==============================] - 25s 481ms/step - loss: 0.6932 - accuracy: 0.7796 - val_loss: 2.7835 - val_accuracy: 0.3278 Epoch 31/100 51/51 [==============================] - ETA: 0s - loss: 0.6640 - accuracy: 0.7907 Epoch 31: val_accuracy did not improve from 0.34444 51/51 [==============================] - 26s 508ms/step - loss: 0.6640 - accuracy: 0.7907 - val_loss: 2.9785 - val_accuracy: 0.3056 Epoch 32/100 51/51 [==============================] - ETA: 0s - loss: 0.6261 - accuracy: 0.8043 Epoch 32: val_accuracy improved from 0.34444 to 0.35556, saving model to best_model.h5 51/51 [==============================] - 35s 694ms/step - loss: 0.6261 - accuracy: 0.8043 - val_loss: 2.9272 - val_accuracy: 0.3556 Epoch 33/100 51/51 [==============================] - ETA: 0s - loss: 0.6057 - accuracy: 0.7926 Epoch 33: val_accuracy did not improve from 0.35556 51/51 [==============================] - 27s 535ms/step - loss: 0.6057 - accuracy: 0.7926 - val_loss: 3.1439 - val_accuracy: 0.3556 Epoch 34/100 51/51 [==============================] - ETA: 0s - loss: 0.5483 - accuracy: 0.8210 Epoch 34: val_accuracy did not improve from 0.35556 51/51 [==============================] - 27s 528ms/step - loss: 0.5483 - accuracy: 0.8210 - val_loss: 3.0316 - val_accuracy: 0.3167 Epoch 35/100 51/51 
[==============================] - ETA: 0s - loss: 0.5308 - accuracy: 0.8321 Epoch 35: val_accuracy did not improve from 0.35556 51/51 [==============================] - 27s 532ms/step - loss: 0.5308 - accuracy: 0.8321 - val_loss: 3.0749 - val_accuracy: 0.3556 Epoch 36/100 51/51 [==============================] - ETA: 0s - loss: 0.5001 - accuracy: 0.8444 Epoch 36: val_accuracy did not improve from 0.35556 51/51 [==============================] - 27s 524ms/step - loss: 0.5001 - accuracy: 0.8444 - val_loss: 3.1617 - val_accuracy: 0.3556 Epoch 37/100 51/51 [==============================] - ETA: 0s - loss: 0.4894 - accuracy: 0.8531 Epoch 37: val_accuracy did not improve from 0.35556 51/51 [==============================] - 27s 530ms/step - loss: 0.4894 - accuracy: 0.8531 - val_loss: 3.1828 - val_accuracy: 0.3444 Epoch 38/100 51/51 [==============================] - ETA: 0s - loss: 0.4417 - accuracy: 0.8593 Epoch 38: val_accuracy did not improve from 0.35556 51/51 [==============================] - 27s 524ms/step - loss: 0.4417 - accuracy: 0.8593 - val_loss: 3.2129 - val_accuracy: 0.3278 Epoch 39/100 51/51 [==============================] - ETA: 0s - loss: 0.4303 - accuracy: 0.8667 Epoch 39: val_accuracy did not improve from 0.35556 51/51 [==============================] - 28s 539ms/step - loss: 0.4303 - accuracy: 0.8667 - val_loss: 3.3287 - val_accuracy: 0.3389 Epoch 40/100 51/51 [==============================] - ETA: 0s - loss: 0.3890 - accuracy: 0.8852 Epoch 40: val_accuracy improved from 0.35556 to 0.36111, saving model to best_model.h5 51/51 [==============================] - 28s 550ms/step - loss: 0.3890 - accuracy: 0.8852 - val_loss: 3.3360 - val_accuracy: 0.3611 Epoch 41/100 51/51 [==============================] - ETA: 0s - loss: 0.3794 - accuracy: 0.8778 Epoch 41: val_accuracy did not improve from 0.36111 51/51 [==============================] - 27s 521ms/step - loss: 0.3794 - accuracy: 0.8778 - val_loss: 3.4751 - val_accuracy: 0.3444 Epoch 42/100 51/51 
[==============================] - ETA: 0s - loss: 0.3582 - accuracy: 0.8889 Epoch 42: val_accuracy did not improve from 0.36111 51/51 [==============================] - 27s 522ms/step - loss: 0.3582 - accuracy: 0.8889 - val_loss: 3.4132 - val_accuracy: 0.3500 Epoch 43/100 51/51 [==============================] - ETA: 0s - loss: 0.3466 - accuracy: 0.8957 Epoch 43: val_accuracy did not improve from 0.36111 51/51 [==============================] - 27s 532ms/step - loss: 0.3466 - accuracy: 0.8957 - val_loss: 3.4640 - val_accuracy: 0.3500 Epoch 44/100 51/51 [==============================] - ETA: 0s - loss: 0.3460 - accuracy: 0.8981 Epoch 44: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 520ms/step - loss: 0.3460 - accuracy: 0.8981 - val_loss: 3.4791 - val_accuracy: 0.3500 Epoch 45/100 51/51 [==============================] - ETA: 0s - loss: 0.3422 - accuracy: 0.8963 Epoch 45: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 518ms/step - loss: 0.3422 - accuracy: 0.8963 - val_loss: 3.6242 - val_accuracy: 0.3389 Epoch 46/100 51/51 [==============================] - ETA: 0s - loss: 0.3332 - accuracy: 0.8963 Epoch 46: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 520ms/step - loss: 0.3332 - accuracy: 0.8963 - val_loss: 3.5350 - val_accuracy: 0.3278 Epoch 47/100 51/51 [==============================] - ETA: 0s - loss: 0.2831 - accuracy: 0.9117 Epoch 47: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 513ms/step - loss: 0.2831 - accuracy: 0.9117 - val_loss: 3.6800 - val_accuracy: 0.3389 Epoch 48/100 51/51 [==============================] - ETA: 0s - loss: 0.3205 - accuracy: 0.9012 Epoch 48: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 509ms/step - loss: 0.3205 - accuracy: 0.9012 - val_loss: 3.7215 - val_accuracy: 0.3500 Epoch 49/100 51/51 [==============================] - ETA: 
0s - loss: 0.2655 - accuracy: 0.9222 Epoch 49: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 518ms/step - loss: 0.2655 - accuracy: 0.9222 - val_loss: 3.7795 - val_accuracy: 0.3278 Epoch 50/100 51/51 [==============================] - ETA: 0s - loss: 0.2691 - accuracy: 0.9191 Epoch 50: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 517ms/step - loss: 0.2691 - accuracy: 0.9191 - val_loss: 3.7258 - val_accuracy: 0.3611 Epoch 51/100 51/51 [==============================] - ETA: 0s - loss: 0.2641 - accuracy: 0.9179 Epoch 51: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 516ms/step - loss: 0.2641 - accuracy: 0.9179 - val_loss: 3.7991 - val_accuracy: 0.3444 Epoch 52/100 51/51 [==============================] - ETA: 0s - loss: 0.2464 - accuracy: 0.9309 Epoch 52: val_accuracy did not improve from 0.36111 51/51 [==============================] - 27s 538ms/step - loss: 0.2464 - accuracy: 0.9309 - val_loss: 3.7702 - val_accuracy: 0.3333 Epoch 53/100 51/51 [==============================] - ETA: 0s - loss: 0.2534 - accuracy: 0.9160 Epoch 53: val_accuracy did not improve from 0.36111 51/51 [==============================] - 27s 529ms/step - loss: 0.2534 - accuracy: 0.9160 - val_loss: 3.7944 - val_accuracy: 0.3222 Epoch 54/100 51/51 [==============================] - ETA: 0s - loss: 0.2317 - accuracy: 0.9309 Epoch 54: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 513ms/step - loss: 0.2317 - accuracy: 0.9309 - val_loss: 3.7949 - val_accuracy: 0.3389 Epoch 55/100 51/51 [==============================] - ETA: 0s - loss: 0.2455 - accuracy: 0.9253 Epoch 55: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 508ms/step - loss: 0.2455 - accuracy: 0.9253 - val_loss: 3.7746 - val_accuracy: 0.3500 Epoch 56/100 51/51 [==============================] - ETA: 0s - loss: 0.2255 - accuracy: 0.9290 
Epoch 56: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 510ms/step - loss: 0.2255 - accuracy: 0.9290 - val_loss: 3.8862 - val_accuracy: 0.3278 Epoch 57/100 51/51 [==============================] - ETA: 0s - loss: 0.2320 - accuracy: 0.9309 Epoch 57: val_accuracy did not improve from 0.36111 51/51 [==============================] - 26s 512ms/step - loss: 0.2320 - accuracy: 0.9309 - val_loss: 3.8327 - val_accuracy: 0.3333 Epoch 58/100 51/51 [==============================] - ETA: 0s - loss: 0.2219 - accuracy: 0.9352 Epoch 58: val_accuracy did not improve from 0.36111 51/51 [==============================] - 27s 540ms/step - loss: 0.2219 - accuracy: 0.9352 - val_loss: 3.9531 - val_accuracy: 0.3333 Epoch 59/100 51/51 [==============================] - ETA: 0s - loss: 0.2075 - accuracy: 0.9383 Epoch 59: val_accuracy did not improve from 0.36111 51/51 [==============================] - 28s 555ms/step - loss: 0.2075 - accuracy: 0.9383 - val_loss: 3.9048 - val_accuracy: 0.3556 Epoch 60/100 51/51 [==============================] - ETA: 0s - loss: 0.2116 - accuracy: 0.9346 Epoch 60: val_accuracy did not improve from 0.36111 51/51 [==============================] - 29s 565ms/step - loss: 0.2116 - accuracy: 0.9346 - val_loss: 3.9443 - val_accuracy: 0.3222 Epoch 60: early stopping
V. Model Evaluation
1. Loss and Accuracy curves
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(len(loss))
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
2. Predict on a specified image
# Load the best-performing model weights
model.load_weights('best_model.h5')

from PIL import Image
import numpy as np

img = Image.open("./face/Jennifer Lawrence/003_963a3627.jpg")  # choose the image you want to predict here
image = tf.image.resize(img, [img_height, img_width])
img_array = tf.expand_dims(image, 0)

predictions = model.predict(img_array)  # use the model you have trained
print("Prediction:", class_names[np.argmax(predictions)])
1/1 [==============================] - 0s 122ms/step
Prediction: Jennifer Lawrence
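Note that because the final Dense layer has no activation and the loss was compiled with from_logits=True, model.predict returns logits rather than probabilities. np.argmax works on logits directly, but a softmax is needed if you want probabilities. A minimal sketch with made-up logits:

```python
import math

def softmax(logits):
    m = max(logits)                        # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 0.5, -1.0]                  # hypothetical model outputs for 3 classes
probs = softmax(logits)

assert abs(sum(probs) - 1.0) < 1e-12                           # valid probabilities
assert probs.index(max(probs)) == logits.index(max(logits))    # argmax is unchanged
```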