5-Day Study Plan · Week 6: Hollywood Star Recognition

This article is an internal free article of the 🔗365天深度学习训练营.
If you write a record post based on this article, please add the following note at the top (copy and paste as-is):

>- **🍨 This article is a learning-record blog from the [🔗365天深度学习训练营](https://mp.weixin.qq.com/s/k-vYaC8l7uxX51WoypLkTw)**
>- **🍦 Reference: 365天深度学习训练营 - Week 6: Hollywood Star Recognition (readable by training-camp members only)**
>- **🍖 Original author: [K同学啊](https://mtyjkh.blog.csdn.net/)**

I. Preliminaries

This is the shared version from the 5-day study plan; more complete content is available in the 365天深度学习训练营.

My environment:

  • Language: Python 3.6.5
  • Editor: Jupyter Notebook
  • Deep learning framework: TensorFlow 2.4.1
  • GPU: NVIDIA GeForce RTX 3080
  • Dataset: reply "DL+48" to the WeChat official account (K同学啊)

1. Set up the GPU

If you are running on a CPU, you can skip this step.

from tensorflow       import keras
from tensorflow.keras import layers,models
import os, PIL, pathlib
import matplotlib.pyplot as plt
import tensorflow        as tf
import numpy             as np

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]                                        # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0],"GPU")
    
gpus
[]

2. Import the data

data_dir = "./48-data/"

data_dir = pathlib.Path(data_dir)

3. Inspect the data

image_count = len(list(data_dir.glob('*/*.jpg')))

print("Total number of images:", image_count)
Total number of images: 1800
image_paths = list(data_dir.glob('Jennifer Lawrence/*.jpg'))
PIL.Image.open(str(image_paths[0]))

[Figure: sample image of Jennifer Lawrence from the dataset]

II. Data Preprocessing

1. Load the data

Use the image_dataset_from_directory method to load the data from disk into a tf.data.Dataset.

On the relationship between the validation set and training:

  1. The validation set takes no part in the gradient-descent phase of training; strictly speaking, it is never used to update the model's parameters.
  2. Broadly speaking, however, the validation set does participate in a "manual tuning" loop: based on how the model performs on the validation data after each epoch, we decide whether to stop training early, and we adjust hyperparameters such as the learning rate and batch_size accordingly (see the sketch after this list).
  3. In that sense the validation set does take part in training, but without the model ever being fit to it directly.
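As a concrete example of that "manual tuning" loop, here is a minimal, hypothetical sketch of early stopping driven by the validation set (it assumes a compiled model named model; the patience value is illustrative):

from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 5 consecutive epochs,
# and roll the model back to its best weights.
early_stop = EarlyStopping(monitor='val_loss',
                           patience=5,
                           restore_best_weights=True)

# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[early_stop])
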
batch_size = 32
img_height = 224
img_width = 224

label_mode:

  • int: labels are encoded as integers (the loss function should be sparse_categorical_crossentropy).
  • categorical: labels are encoded as one-hot categorical vectors (the loss function should be categorical_crossentropy).
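The pairing between label_mode and the loss function matters; a minimal illustration (this post uses label_mode="categorical", so the one-hot variant applies):

# label_mode="categorical": one-hot labels such as [0, 0, 1, 0, ...]
loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True)

# label_mode="int": integer labels such as 2
# loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
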
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset="training",
    label_mode = "categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 1800 files belonging to 17 classes.
Using 1620 files for training.
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset="validation",
    label_mode = "categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 1800 files belonging to 17 classes.
Using 180 files for validation.

We can output the dataset's labels via class_names. The labels correspond to the directory names, in alphabetical order.

class_names = train_ds.class_names
print(class_names)
['Angelina Jolie', 'Brad Pitt', 'Denzel Washington', 'Hugh Jackman', 'Jennifer Lawrence', 'Johnny Depp', 'Kate Winslet', 'Leonardo DiCaprio', 'Megan Fox', 'Natalie Portman', 'Nicole Kidman', 'Robert Downey Jr', 'Sandra Bullock', 'Scarlett Johansson', 'Tom Cruise', 'Tom Hanks', 'Will Smith']

2. Visualize the data

plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[np.argmax(labels[i])])
        
        plt.axis("off")

[Figure: a grid of 20 sample training images, each titled with its class name]

3. Re-check the data

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
(32, 224, 224, 3)
(32, 17)
  • image_batch is a tensor of shape (32, 224, 224, 3): a batch of 32 images of shape 224×224×3 (the last dimension is the RGB color channels).
  • labels_batch is a tensor of shape (32, 17): one one-hot label vector per image, since label_mode="categorical" with 17 classes.

4. Configure the dataset

  • shuffle(): shuffles the data. For a detailed introduction to this function, see: https://zhuanlan.zhihu.com/p/42417456
  • prefetch(): prefetches data, speeding up execution.

How prefetch() helps, in detail: while the CPU is preparing data, the accelerator sits idle; conversely, while the accelerator is training the model, the CPU sits idle. The time a training run takes is therefore the sum of the CPU preprocessing time and the accelerator training time. prefetch() overlaps the preprocessing and the model execution of training steps: while the accelerator is executing training step N, the CPU is already preparing the data for step N+1. This both minimizes the time per training step (rather than the total time) and reduces the time needed to extract and transform the data. Without prefetch(), the CPU and the GPU/TPU are idle most of the time:

[Figure: timeline of a naive pipeline, where CPU preprocessing and GPU/TPU training alternate and each sits idle while the other works]

Using prefetch() significantly reduces the idle time:

[Figure: timeline with prefetch(), where preprocessing of step N+1 overlaps with training of step N]

  • cache(): caches the dataset in memory, speeding up execution.

AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
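To gauge the effect of these settings yourself, here is a rough timing sketch modeled on the tf.data performance guide (the benchmark helper below is illustrative, not part of the original post):

import time

def benchmark(ds, num_epochs=2):
    # Pull every batch so the whole input pipeline runs end to end.
    start = time.perf_counter()
    for _ in range(num_epochs):
        for _ in ds:
            pass
    print("Execution time:", time.perf_counter() - start)

# benchmark(train_ds)  # compare against the same pipeline without cache()/prefetch()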

III. Building the CNN

The input to a convolutional neural network (CNN) is a tensor of shape (image_height, image_width, color_channels), carrying the image's height, width, and color information; the batch size is not included. color_channels corresponds to the three RGB color channels (R, G, B). In this example, the CNN input has the shape (224, 224, 3), i.e. color images. We pass this shape to the first layer via the input_shape argument.

"""
关于卷积核的计算不懂的可以参考文章:https://blog.csdn.net/qq_38251616/article/details/114278995

layers.Dropout(0.4) 作用是防止过拟合,提高模型的泛化能力。
关于Dropout层的更多介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/115826689
"""

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    
    layers.Conv2D(16, (3, 3), activation='relu'),  # convolution layer 1, 3x3 kernel
    layers.AveragePooling2D((2, 2)),               # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # convolution layer 2, 3x3 kernel
    layers.AveragePooling2D((2, 2)),               # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.5),
    layers.Conv2D(64, (3, 3), activation='relu'),  # convolution layer 3, 3x3 kernel
    layers.AveragePooling2D((2, 2)),               # pooling layer 3, 2x2 downsampling
    layers.Dropout(0.5),
    layers.Conv2D(128, (3, 3), activation='relu'), # convolution layer 4, 3x3 kernel
    layers.Dropout(0.5),
    
    layers.Flatten(),                      # Flatten layer, bridges the conv and dense layers
    layers.Dense(128, activation='relu'),  # fully connected layer for further feature extraction
    layers.Dense(len(class_names))         # output layer: one logit per class
])

model.summary()  # print the network structure
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
rescaling (Rescaling)        (None, 224, 224, 3)       0         
_________________________________________________________________
conv2d (Conv2D)              (None, 222, 222, 16)      448       
_________________________________________________________________
average_pooling2d (AveragePo (None, 111, 111, 16)      0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 109, 109, 32)      4640      
_________________________________________________________________
average_pooling2d_1 (Average (None, 54, 54, 32)        0         
_________________________________________________________________
dropout (Dropout)            (None, 54, 54, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 52, 52, 64)        18496     
_________________________________________________________________
average_pooling2d_2 (Average (None, 26, 26, 64)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 26, 26, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 24, 24, 128)       73856     
_________________________________________________________________
dropout_2 (Dropout)          (None, 24, 24, 128)       0         
_________________________________________________________________
flatten (Flatten)            (None, 73728)             0         
_________________________________________________________________
dense (Dense)                (None, 128)               9437312   
_________________________________________________________________
dense_1 (Dense)              (None, 17)                2193      
=================================================================
Total params: 9,536,945
Trainable params: 9,536,945
Non-trainable params: 0
_________________________________________________________________
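As a sanity check on the summary above, the parameter counts can be reproduced by hand: a Conv2D layer has (kernel_height × kernel_width × input_channels + 1 bias) × output_channels parameters, and a Dense layer has (inputs + 1) × units. For example:

# Reproducing two rows of model.summary() by hand:
conv2d_params = (3 * 3 * 3 + 1) * 16   # first Conv2D: 448
dense_params  = (73728 + 1) * 128      # first Dense: 9,437,312
print(conv2d_params, dense_params)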

IV. Training the Model

Before the model is ready for training, a few more settings are needed. These are added in the model's compile step:

  • Loss function (loss): measures how accurate the model is during training.
  • Optimizer (optimizer): determines how the model is updated based on the data it sees and its own loss function.
  • Metrics (metrics): used to monitor the training and testing steps. The example below uses accuracy, the fraction of images that are correctly classified.
# use the Adam optimizer with its default learning rate
optimizer = tf.keras.optimizers.Adam()

model.compile(optimizer=optimizer,
              # the last Dense layer outputs raw logits, hence from_logits=True
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 50

# save the best model weights
checkpointer = ModelCheckpoint('best_model.h5',
                                monitor='val_accuracy',
                                verbose=1,
                                save_best_only=True,
                                save_weights_only=True)
history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer])  # without this callback, best_model.h5 is never written
Epoch 1/50
51/51 [==============================] - 5s 20ms/step - loss: 2.8329 - accuracy: 0.0895 - val_loss: 2.7887 - val_accuracy: 0.1389
Epoch 2/50
51/51 [==============================] - 1s 12ms/step - loss: 2.7283 - accuracy: 0.1296 - val_loss: 2.6685 - val_accuracy: 0.1889
Epoch 3/50
51/51 [==============================] - 1s 12ms/step - loss: 2.5299 - accuracy: 0.1914 - val_loss: 2.5374 - val_accuracy: 0.1611
Epoch 4/50
51/51 [==============================] - 1s 12ms/step - loss: 2.3337 - accuracy: 0.2469 - val_loss: 2.4386 - val_accuracy: 0.2333
Epoch 5/50
51/51 [==============================] - 1s 12ms/step - loss: 2.1101 - accuracy: 0.3130 - val_loss: 2.3895 - val_accuracy: 0.2167
Epoch 6/50
51/51 [==============================] - 1s 12ms/step - loss: 1.8293 - accuracy: 0.4080 - val_loss: 2.2831 - val_accuracy: 0.2500
Epoch 7/50
51/51 [==============================] - 1s 12ms/step - loss: 1.4679 - accuracy: 0.5043 - val_loss: 2.3525 - val_accuracy: 0.2611
Epoch 8/50
51/51 [==============================] - 1s 12ms/step - loss: 1.0419 - accuracy: 0.6667 - val_loss: 2.6046 - val_accuracy: 0.2611
Epoch 9/50
51/51 [==============================] - 1s 12ms/step - loss: 0.7165 - accuracy: 0.7599 - val_loss: 2.9672 - val_accuracy: 0.2944
Epoch 10/50
51/51 [==============================] - 1s 12ms/step - loss: 0.4798 - accuracy: 0.8451 - val_loss: 3.3681 - val_accuracy: 0.3278
Epoch 11/50
51/51 [==============================] - 1s 12ms/step - loss: 0.3122 - accuracy: 0.8981 - val_loss: 4.1233 - val_accuracy: 0.3167
Epoch 12/50
51/51 [==============================] - 1s 12ms/step - loss: 0.2503 - accuracy: 0.9204 - val_loss: 4.4343 - val_accuracy: 0.3444
Epoch 13/50
51/51 [==============================] - 1s 12ms/step - loss: 0.2311 - accuracy: 0.9272 - val_loss: 4.2779 - val_accuracy: 0.3222
Epoch 14/50
51/51 [==============================] - 1s 12ms/step - loss: 0.1623 - accuracy: 0.9444 - val_loss: 4.6655 - val_accuracy: 0.3389
Epoch 15/50
51/51 [==============================] - 1s 12ms/step - loss: 0.1790 - accuracy: 0.9438 - val_loss: 4.6963 - val_accuracy: 0.3278
Epoch 16/50
51/51 [==============================] - 1s 12ms/step - loss: 0.1234 - accuracy: 0.9605 - val_loss: 5.3258 - val_accuracy: 0.3167
Epoch 17/50
51/51 [==============================] - 1s 12ms/step - loss: 0.1344 - accuracy: 0.9623 - val_loss: 5.1431 - val_accuracy: 0.3000
Epoch 18/50
51/51 [==============================] - 1s 12ms/step - loss: 0.1094 - accuracy: 0.9654 - val_loss: 5.7386 - val_accuracy: 0.3000
Epoch 19/50
51/51 [==============================] - 1s 12ms/step - loss: 0.1050 - accuracy: 0.9691 - val_loss: 5.4096 - val_accuracy: 0.3444
Epoch 20/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0791 - accuracy: 0.9784 - val_loss: 5.7964 - val_accuracy: 0.3389
Epoch 21/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0507 - accuracy: 0.9802 - val_loss: 6.0106 - val_accuracy: 0.3056
Epoch 22/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0691 - accuracy: 0.9821 - val_loss: 6.0377 - val_accuracy: 0.3222
Epoch 23/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0781 - accuracy: 0.9753 - val_loss: 5.2907 - val_accuracy: 0.3278
Epoch 24/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0638 - accuracy: 0.9753 - val_loss: 5.6900 - val_accuracy: 0.3500
Epoch 25/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0575 - accuracy: 0.9796 - val_loss: 5.6980 - val_accuracy: 0.2944
Epoch 26/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0595 - accuracy: 0.9827 - val_loss: 6.0583 - val_accuracy: 0.3111
Epoch 27/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0567 - accuracy: 0.9858 - val_loss: 6.1378 - val_accuracy: 0.3222
Epoch 28/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0506 - accuracy: 0.9846 - val_loss: 6.3385 - val_accuracy: 0.3167
Epoch 29/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0480 - accuracy: 0.9821 - val_loss: 6.7924 - val_accuracy: 0.3056
Epoch 30/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0511 - accuracy: 0.9809 - val_loss: 6.8792 - val_accuracy: 0.2944
Epoch 31/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0364 - accuracy: 0.9883 - val_loss: 7.1183 - val_accuracy: 0.3500
Epoch 32/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0635 - accuracy: 0.9802 - val_loss: 6.4413 - val_accuracy: 0.3111
Epoch 33/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0429 - accuracy: 0.9883 - val_loss: 6.2766 - val_accuracy: 0.3222
Epoch 34/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0494 - accuracy: 0.9833 - val_loss: 6.5672 - val_accuracy: 0.2889
Epoch 35/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0334 - accuracy: 0.9870 - val_loss: 7.1044 - val_accuracy: 0.3778
Epoch 36/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0367 - accuracy: 0.9877 - val_loss: 6.9240 - val_accuracy: 0.3389
Epoch 37/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0371 - accuracy: 0.9870 - val_loss: 6.8424 - val_accuracy: 0.3111
Epoch 38/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0251 - accuracy: 0.9914 - val_loss: 7.3906 - val_accuracy: 0.3222
Epoch 39/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0364 - accuracy: 0.9877 - val_loss: 7.3986 - val_accuracy: 0.3056
Epoch 40/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0302 - accuracy: 0.9889 - val_loss: 7.7252 - val_accuracy: 0.2833
Epoch 41/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0618 - accuracy: 0.9815 - val_loss: 6.4630 - val_accuracy: 0.2889
Epoch 42/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0448 - accuracy: 0.9827 - val_loss: 6.6955 - val_accuracy: 0.3000
Epoch 43/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0290 - accuracy: 0.9914 - val_loss: 6.4365 - val_accuracy: 0.3000
Epoch 44/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0245 - accuracy: 0.9926 - val_loss: 6.4801 - val_accuracy: 0.3000
Epoch 45/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0239 - accuracy: 0.9926 - val_loss: 6.9107 - val_accuracy: 0.3000
Epoch 46/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0307 - accuracy: 0.9907 - val_loss: 6.4642 - val_accuracy: 0.3000
Epoch 47/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0313 - accuracy: 0.9901 - val_loss: 7.5312 - val_accuracy: 0.2833
Epoch 48/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0284 - accuracy: 0.9926 - val_loss: 7.2625 - val_accuracy: 0.2889
Epoch 49/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0538 - accuracy: 0.9840 - val_loss: 6.8608 - val_accuracy: 0.2833
Epoch 50/50
51/51 [==============================] - 1s 12ms/step - loss: 0.0390 - accuracy: 0.9889 - val_loss: 6.9927 - val_accuracy: 0.3111

The best result I have achieved with my approach is: loss: 0.0087 - accuracy: 0.9975 - val_loss: 1.2887 - val_accuracy: 0.6778

V. Model Evaluation

1. Loss and Accuracy curves

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure: training and validation accuracy (left) and loss (right) over the 50 epochs]

2. Predict a specified image

# load the weights of the best-performing model
model.load_weights('best_model.h5')
from PIL import Image
import numpy as np

img = Image.open("./48-data/Jennifer Lawrence/003_963a3627.jpg")  # pick the image you want to predict
image = tf.image.resize(img, [img_height, img_width])

img_array = tf.expand_dims(image, 0)  # add a batch dimension

predictions = model.predict(img_array)  # run your trained model
print("Prediction:", class_names[np.argmax(predictions)])
Prediction: Jennifer Lawrence
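
Since the model's last Dense layer has no softmax activation (we compiled with from_logits=True), predictions holds raw logits. A minimal follow-up sketch to turn them into a probability-like confidence score:

# Convert the logits to probabilities before reading off a confidence value.
probs = tf.nn.softmax(predictions[0])
print("Confidence: {:.2%}".format(float(np.max(probs))))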

