● This article is an internal article of the "365-Day Deep Learning Training Camp"
● If you write an article based on this one, please include the 「🔗 Declaration」 at the beginning
Hollywood Star Recognition
I. Preparation
My environment:
● Language: Python 3.8.8
● IDE: Jupyter Notebook
● Deep learning framework: TensorFlow 2.10.1
● GPU: NVIDIA GeForce RTX 3080
1. Configure the GPU
from tensorflow import keras
from tensorflow.keras import layers, models
import os, PIL, pathlib
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]  # If there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # Allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

gpus
2. Import the Data
data_dir = "./48-data/"
data_dir = pathlib.Path(data_dir)
3. Inspect the Data
image_count = len(list(data_dir.glob('*/*.jpg')))
print("Total number of images:", image_count)

faces = list(data_dir.glob('Jennifer Lawrence/*.jpg'))
PIL.Image.open(str(faces[0]))
II. Data Preprocessing
1. Load the Data
batch_size = 32
img_height = 224
img_width = 224
The label_mode parameter (a quick sketch of the two encodings follows this list):
● int: labels are encoded as integers (the matching loss function is sparse_categorical_crossentropy).
● categorical: labels are encoded as one-hot categorical vectors (the matching loss function is categorical_crossentropy).
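To make the difference concrete, here is a minimal sketch (with made-up values for a hypothetical 3-class case, not taken from this dataset) of the two encodings and the loss each pairs with:

# Sketch: the two label encodings for a hypothetical 3-class example
import numpy as np
import tensorflow as tf

int_label = 2                              # label_mode="int": a plain class index
one_hot_label = np.array([0.0, 0.0, 1.0])  # label_mode="categorical": a one-hot vector

# Matching losses (from_logits=True assumes the model outputs raw logits, as ours does)
loss_for_int = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_for_one_hot = tf.keras.losses.CategoricalCrossentropy(from_logits=True)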
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset="training",
    label_mode="categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset="validation",
    label_mode="categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
We can print the dataset's labels via class_names; the labels correspond to the directory names in alphabetical order.
class_names = train_ds.class_names
print(class_names)
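If you want to double-check where these labels come from, a small sketch (assuming data_dir from above is in scope) confirms they mirror the subdirectories of data_dir in sorted order:

# Sketch: class_names should match the subdirectory names under data_dir, sorted alphabetically
sorted_dirs = sorted(item.name for item in data_dir.iterdir() if item.is_dir())
print(sorted_dirs)  # expected to equal train_ds.class_names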
2. Visualize the Data
# Visualize 20 sample images from the training set
plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[np.argmax(labels[i])])
        plt.axis("off")
3. Check the Data Again
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
4. Configure the Dataset
● shuffle(): shuffles the data; for a detailed introduction to this function, see: https://zhuanlan.zhihu.com/p/42417456
● prefetch(): prefetches data to speed up training
# Configure the dataset: cache, shuffle, and prefetch
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
III. Build the CNN
A convolutional neural network (CNN) takes input tensors of shape (image_height, image_width, color_channels), encoding the image's height, width, and color information; the batch size is not part of this shape. color_channels = (R, G, B) corresponds to the three RGB color channels. In this example, the CNN input has shape (224, 224, 3), i.e. color images. We pass this shape to the first layer via the input_shape argument.
"""
关于卷积核的计算不懂的可以参考文章:https://blog.csdn.net/qq_38251616/article/details/114278995
layers.Dropout(0.4) 作用是防止过拟合,提高模型的泛化能力。
关于Dropout层的更多介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/115826689
"""
model = models.Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, (3, 3), activation='relu', input_shape=(img_height, img_width, 3)), # 卷积层1,卷积核3*3
layers.AveragePooling2D((2, 2)), # 池化层1,2*2采样
layers.Conv2D(32, (3, 3), activation='relu'), # 卷积层2,卷积核3*3
layers.AveragePooling2D((2, 2)), # 池化层2,2*2采样
layers.Dropout(0.5),
layers.Conv2D(64, (3, 3), activation='relu'), # 卷积层3,卷积核3*3
layers.AveragePooling2D((2, 2)),
layers.Dropout(0.5),
layers.Conv2D(128, (3, 3), activation='relu'), # 卷积层3,卷积核3*3
layers.Dropout(0.5),
layers.Flatten(), # Flatten层,连接卷积层与全连接层
layers.Dense(128, activation='relu'), # 全连接层,特征进一步提取
layers.Dense(len(class_names)) # 输出层,输出预期结果
])
model.summary() # 打印网络结构
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
rescaling (Rescaling) (None, 224, 224, 3) 0
_________________________________________________________________
conv2d (Conv2D) (None, 222, 222, 16) 448
_________________________________________________________________
average_pooling2d (AveragePo (None, 111, 111, 16) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 109, 109, 32) 4640
_________________________________________________________________
average_pooling2d_1 (Average (None, 54, 54, 32) 0
_________________________________________________________________
dropout (Dropout) (None, 54, 54, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 52, 52, 64) 18496
_________________________________________________________________
average_pooling2d_2 (Average (None, 26, 26, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 26, 26, 64) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 24, 24, 128) 73856
_________________________________________________________________
dropout_2 (Dropout) (None, 24, 24, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 73728) 0
_________________________________________________________________
dense (Dense) (None, 128) 9437312
_________________________________________________________________
dense_1 (Dense) (None, 17) 2193
=================================================================
Total params: 9,536,945
Trainable params: 9,536,945
Non-trainable params: 0
_________________________________________________________________
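As a sanity check on the summary above, the parameter counts can be reproduced by hand; a small sketch (the two layers chosen are just examples):

# Sketch: verifying two parameter counts from model.summary() by hand
conv1_params = (3 * 3 * 3 + 1) * 16  # 3x3 kernel over 3 input channels, +1 bias, 16 filters -> 448
dense_params = 73728 * 128 + 128     # flattened features x units, plus one bias per unit -> 9,437,312
print(conv1_params, dense_params)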
IV. Train the Model
Before the model is ready for training, a few more settings are needed. These are added in the model's compile step:
● Loss function (loss): measures how accurate the model is during training.
● Optimizer (optimizer): determines how the model is updated based on the data it sees and its own loss function.
● Metrics (metrics): used to monitor the training and testing steps. The example below uses accuracy, the fraction of images that are correctly classified.
1. Set a Dynamic Learning Rate
📮 The ExponentialDecay function:
tf.keras.optimizers.schedules.ExponentialDecay is a learning-rate decay schedule in TensorFlow, used to lower the learning rate dynamically while training a neural network. Learning-rate decay is a common technique that helps the optimizer converge more effectively toward a good minimum and thereby improve model performance.
🔎 Main parameters (a worked sketch of the decay formula follows this list):
● initial_learning_rate: the initial learning rate.
● decay_steps: the number of steps between decays. After every decay_steps steps, the learning rate decays exponentially. For example, if decay_steps is set to 10, the rate decays once every 10 steps.
● decay_rate: the decay rate, which determines how the learning rate shrinks. It is usually a value between 0 and 1.
● staircase: a boolean controlling how the decay is applied. If True, the learning rate drops discretely after every decay_steps steps, producing a staircase-shaped curve. If False, the learning rate decays continuously.
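As a worked illustration, here is a standalone sketch of the formula ExponentialDecay implements (not library code), using the same values as the schedule below:

# Sketch: the formula behind ExponentialDecay
import math

def exponential_decay(step, initial_lr=1e-4, decay_steps=60, decay_rate=0.96, staircase=True):
    exponent = step / decay_steps
    if staircase:
        exponent = math.floor(exponent)  # drop only once every decay_steps steps
    return initial_lr * decay_rate ** exponent

print(exponential_decay(59))  # 1e-4: still inside the first 60 steps
print(exponential_decay(60))  # 9.6e-5: decayed once, i.e. 0.96 * 1e-4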
# Set the initial learning rate
initial_learning_rate = 1e-4

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=60,   # Important: this counts steps, not epochs!
    decay_rate=0.96,  # After each decay, lr becomes decay_rate * lr
    staircase=True)

# Feed the exponentially decaying learning rate into the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

model.compile(optimizer=optimizer,
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
Loss functions explained:
- binary_crossentropy (log loss)
The loss paired with sigmoid, for binary classification problems.
- categorical_crossentropy (multi-class log loss)
The loss paired with softmax; if the labels are one-hot encoded, use categorical_crossentropy. That is exactly our situation, since we loaded the data with label_mode="categorical"; from_logits=True is needed because the model's output layer emits raw logits without a softmax.
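A minimal sketch (with made-up numbers) of what from_logits=True means: passing raw logits with from_logits=True is equivalent to applying softmax first and passing probabilities with from_logits=False.

# Sketch: from_logits=True on raw logits equals from_logits=False on softmax outputs
import tensorflow as tf

logits = tf.constant([[2.0, 0.5, -1.0]])  # raw model outputs, no softmax applied
labels = tf.constant([[1.0, 0.0, 0.0]])   # a one-hot label

loss_from_logits = tf.keras.losses.CategoricalCrossentropy(from_logits=True)(labels, logits)
loss_from_probs = tf.keras.losses.CategoricalCrossentropy(from_logits=False)(labels, tf.nn.softmax(logits))
print(float(loss_from_logits), float(loss_from_probs))  # the two values match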
2. Early Stopping and Saving the Best Model Weights
EarlyStopping() parameter descriptions:
● monitor: the quantity to be monitored.
● min_delta: the minimum change in the monitored quantity that counts as an improvement; an absolute change smaller than min_delta counts as no improvement.
● patience: the number of epochs with no improvement after which training is stopped.
● verbose: verbosity mode.
● mode: one of {auto, min, max}. In min mode, training stops when the monitored quantity stops decreasing; in max mode, training stops when it stops increasing; in auto mode, the direction is inferred automatically from the name of the monitored quantity.
● baseline: a baseline value for the monitored quantity. Training stops if the model shows no improvement over the baseline.
● restore_best_weights: whether to restore the model weights from the epoch with the best value of the monitored quantity. If False, the weights obtained at the last step of training are used.
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 100

# Save the best model weights
checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

# Set up early stopping
earlystopper = EarlyStopping(monitor='val_accuracy',
                             min_delta=0.001,
                             patience=20,
                             verbose=1)
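As an aside, the restore_best_weights option from the parameter list above offers an alternative to reloading best_model.h5 manually after training; a minimal sketch of that variant (reusing the import above; not what this tutorial actually runs):

# Alternative sketch: let EarlyStopping itself roll back to the best weights on stop
earlystopper = EarlyStopping(monitor='val_accuracy',
                             min_delta=0.001,
                             patience=20,
                             verbose=1,
                             restore_best_weights=True)  # restore weights from the best epoch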
3. Model Training
history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer, earlystopper])
Epoch 1/100
50/51 [============================>.] - ETA: 0s - loss: 0.1421 - accuracy: 0.9654
Epoch 1: val_accuracy improved from -inf to 0.36111, saving model to best_model.h5
51/51 [==============================] - 3s 50ms/step - loss: 0.1412 - accuracy: 0.9654 - val_loss: 3.7582 - val_accuracy: 0.3611
Epoch 2/100
51/51 [==============================] - ETA: 0s - loss: 0.1355 - accuracy: 0.9704
Epoch 2: val_accuracy improved from 0.36111 to 0.37222, saving model to best_model.h5
51/51 [==============================] - 2s 49ms/step - loss: 0.1355 - accuracy: 0.9704 - val_loss: 3.7401 - val_accuracy: 0.3722
Epoch 3/100
51/51 [==============================] - ETA: 0s - loss: 0.1329 - accuracy: 0.9648
Epoch 3: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 46ms/step - loss: 0.1329 - accuracy: 0.9648 - val_loss: 3.7600 - val_accuracy: 0.3722
Epoch 4/100
51/51 [==============================] - ETA: 0s - loss: 0.1228 - accuracy: 0.9728
Epoch 4: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 47ms/step - loss: 0.1228 - accuracy: 0.9728 - val_loss: 3.7345 - val_accuracy: 0.3722
Epoch 5/100
51/51 [==============================] - ETA: 0s - loss: 0.1394 - accuracy: 0.9605
Epoch 5: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 48ms/step - loss: 0.1394 - accuracy: 0.9605 - val_loss: 3.7784 - val_accuracy: 0.3722
Epoch 6/100
51/51 [==============================] - ETA: 0s - loss: 0.1412 - accuracy: 0.9654
Epoch 6: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 46ms/step - loss: 0.1412 - accuracy: 0.9654 - val_loss: 3.8007 - val_accuracy: 0.3722
Epoch 7/100
51/51 [==============================] - ETA: 0s - loss: 0.1338 - accuracy: 0.9642
Epoch 7: val_accuracy improved from 0.37222 to 0.37778, saving model to best_model.h5
51/51 [==============================] - 2s 48ms/step - loss: 0.1338 - accuracy: 0.9642 - val_loss: 3.8068 - val_accuracy: 0.3778
Epoch 8/100
51/51 [==============================] - ETA: 0s - loss: 0.1485 - accuracy: 0.9574
Epoch 8: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1485 - accuracy: 0.9574 - val_loss: 3.7852 - val_accuracy: 0.3667
Epoch 9/100
51/51 [==============================] - ETA: 0s - loss: 0.1301 - accuracy: 0.9691
Epoch 9: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1301 - accuracy: 0.9691 - val_loss: 3.7765 - val_accuracy: 0.3667
Epoch 10/100
51/51 [==============================] - ETA: 0s - loss: 0.1502 - accuracy: 0.9623
Epoch 10: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1502 - accuracy: 0.9623 - val_loss: 3.7906 - val_accuracy: 0.3722
Epoch 11/100
51/51 [==============================] - ETA: 0s - loss: 0.1243 - accuracy: 0.9667
Epoch 11: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1243 - accuracy: 0.9667 - val_loss: 3.8046 - val_accuracy: 0.3722
Epoch 12/100
51/51 [==============================] - ETA: 0s - loss: 0.1201 - accuracy: 0.9710
Epoch 12: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1201 - accuracy: 0.9710 - val_loss: 3.7810 - val_accuracy: 0.3778
Epoch 13/100
51/51 [==============================] - ETA: 0s - loss: 0.1242 - accuracy: 0.9667
Epoch 13: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 49ms/step - loss: 0.1242 - accuracy: 0.9667 - val_loss: 3.8090 - val_accuracy: 0.3722
Epoch 14/100
51/51 [==============================] - ETA: 0s - loss: 0.1313 - accuracy: 0.9654
Epoch 14: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 49ms/step - loss: 0.1313 - accuracy: 0.9654 - val_loss: 3.8195 - val_accuracy: 0.3667
Epoch 15/100
51/51 [==============================] - ETA: 0s - loss: 0.1196 - accuracy: 0.9716
Epoch 15: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1196 - accuracy: 0.9716 - val_loss: 3.8146 - val_accuracy: 0.3667
Epoch 16/100
51/51 [==============================] - ETA: 0s - loss: 0.1263 - accuracy: 0.9642
Epoch 16: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 49ms/step - loss: 0.1263 - accuracy: 0.9642 - val_loss: 3.7870 - val_accuracy: 0.3667
Epoch 17/100
51/51 [==============================] - ETA: 0s - loss: 0.1504 - accuracy: 0.9562
Epoch 17: val_accuracy did not improve from 0.37778
51/51 [==============================] - 3s 49ms/step - loss: 0.1504 - accuracy: 0.9562 - val_loss: 3.8275 - val_accuracy: 0.3667
Epoch 18/100
51/51 [==============================] - ETA: 0s - loss: 0.1315 - accuracy: 0.9611
Epoch 18: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1315 - accuracy: 0.9611 - val_loss: 3.8288 - val_accuracy: 0.3778
Epoch 19/100
51/51 [==============================] - ETA: 0s - loss: 0.1326 - accuracy: 0.9630
Epoch 19: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 48ms/step - loss: 0.1326 - accuracy: 0.9630 - val_loss: 3.8292 - val_accuracy: 0.3722
Epoch 20/100
51/51 [==============================] - ETA: 0s - loss: 0.1431 - accuracy: 0.9605
Epoch 20: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 48ms/step - loss: 0.1431 - accuracy: 0.9605 - val_loss: 3.8238 - val_accuracy: 0.3722
Epoch 21/100
51/51 [==============================] - ETA: 0s - loss: 0.1302 - accuracy: 0.9679
Epoch 21: val_accuracy did not improve from 0.37778
51/51 [==============================] - 3s 49ms/step - loss: 0.1302 - accuracy: 0.9679 - val_loss: 3.8227 - val_accuracy: 0.3722
Epoch 22/100
51/51 [==============================] - ETA: 0s - loss: 0.1262 - accuracy: 0.9685
Epoch 22: val_accuracy did not improve from 0.37778
51/51 [==============================] - 3s 50ms/step - loss: 0.1262 - accuracy: 0.9685 - val_loss: 3.8172 - val_accuracy: 0.3778
Epoch 23/100
51/51 [==============================] - ETA: 0s - loss: 0.1291 - accuracy: 0.9698
Epoch 23: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1291 - accuracy: 0.9698 - val_loss: 3.8271 - val_accuracy: 0.3778
Epoch 24/100
51/51 [==============================] - ETA: 0s - loss: 0.1212 - accuracy: 0.9710
Epoch 24: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1212 - accuracy: 0.9710 - val_loss: 3.8194 - val_accuracy: 0.3778
Epoch 25/100
50/51 [============================>.] - ETA: 0s - loss: 0.1338 - accuracy: 0.9660
Epoch 25: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1325 - accuracy: 0.9667 - val_loss: 3.8270 - val_accuracy: 0.3778
Epoch 26/100
51/51 [==============================] - ETA: 0s - loss: 0.1057 - accuracy: 0.9753
Epoch 26: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1057 - accuracy: 0.9753 - val_loss: 3.8331 - val_accuracy: 0.3778
Epoch 27/100
51/51 [==============================] - ETA: 0s - loss: 0.1316 - accuracy: 0.9654
Epoch 27: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 47ms/step - loss: 0.1316 - accuracy: 0.9654 - val_loss: 3.8394 - val_accuracy: 0.3778
Epoch 27: early stopping
V. Model Evaluation
1. Loss and Accuracy Plots
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(len(loss))
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
2. Predict on a Specified Image
# Load the weights of the best-performing model
model.load_weights('best_model.h5')

from PIL import Image
import numpy as np

img = Image.open("./48-data/Jennifer Lawrence/003_963a3627.jpg")  # Choose the image you want to predict on
image = tf.image.resize(img, [img_height, img_width])

img_array = tf.expand_dims(image, 0)

predictions = model.predict(img_array)  # Use your trained model here
print("Prediction:", class_names[np.argmax(predictions)])
When I reproduced this step, it raised an error. The cause: TensorFlow tries to convert a PIL.Image object directly into a Tensor, but tf.image.resize() expects a Tensor as input. A PIL.Image must first be converted to a NumPy array or a TensorFlow Tensor before it can be processed.
Solution:
First convert the PIL.Image object to a NumPy array, or use tf.convert_to_tensor() to turn it into a Tensor.
Also, if img_height and img_width have not been defined yet, you need to assign them values.
from PIL import Image
import numpy as np
import tensorflow as tf

# Load the image
img = Image.open("./48-data/Jennifer Lawrence/003_963a3627.jpg")  # Choose the image you want to predict on

# Convert the image to a NumPy array (or directly to a Tensor)
img_array = np.array(img)

# Resize the image if needed, using tf.image.resize
image = tf.image.resize(img_array, [img_height, img_width])

# Add a batch dimension to match the model's input (batch size = 1)
img_array = tf.expand_dims(image, 0)

# Predict with the trained model
predictions = model.predict(img_array)

# Print the prediction
print("Prediction:", class_names[np.argmax(predictions)])
VI. Summary
The prediction step raised an error when I reproduced it. The cause: TensorFlow tries to convert a PIL.Image object directly into a Tensor, but tf.image.resize() expects a Tensor as input, so the PIL.Image must first be converted to a NumPy array or a TensorFlow Tensor.
Solution:
First convert the PIL.Image object to a NumPy array, or use tf.convert_to_tensor() to turn it into a Tensor.
Also, if img_height and img_width have not been defined yet, you need to assign them values.
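The fixed code above takes the NumPy-array route; the tf.convert_to_tensor() route mentioned in the solution looks like this (a minimal sketch, assuming the same image path and that model, class_names, img_height, and img_width are already in scope):

# Alternative sketch: convert via tf.convert_to_tensor() instead of np.array() alone
from PIL import Image
import numpy as np
import tensorflow as tf

img = Image.open("./48-data/Jennifer Lawrence/003_963a3627.jpg")
img_tensor = tf.convert_to_tensor(np.array(img))              # PIL.Image -> ndarray -> Tensor
image = tf.image.resize(img_tensor, [img_height, img_width])  # resize to the model's input size
img_array = tf.expand_dims(image, 0)                          # add the batch dimension
predictions = model.predict(img_array)
print("Prediction:", class_names[np.argmax(predictions)])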