Deep Learning Day 4: T4 Monkeypox Recognition


Preface

This article uses a CNN to classify monkeypox against other skin conditions. Compared with the previous article, training here adds callbacks: a ModelCheckpoint saves only the best model weights, so the trained model can later be loaded directly for inference. The article walks through the implementation code and its results, then briefly discusses the related knowledge points.

Keywords: ModelCheckpoint in detail, EarlyStopping in detail, a combined ModelCheckpoint and EarlyStopping example, model.fit() in detail, tf.image.resize in detail, and a walkthrough of the image-prediction code.

I. My Environment

  • OS: Windows 11
  • Language: Python 3.8.6
  • IDE: PyCharm
  • Deep learning framework: TensorFlow 2.10.1

II. Code Implementation and Results

1. Import libraries

from PIL import Image
import numpy as np
from pathlib import Path
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.callbacks import ModelCheckpoint
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')  # suppress warning output; no need to print it

2. Set up the GPU (skip this step if you are using a CPU)

'''Setup - configure the GPU (skip this step if you are using a CPU)'''
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    gpu0 = gpus[0]  # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

My computer has no discrete GPU, so this step was commented out and not executed.

3. Import the data

[Monkeypox dataset](https://pan.baidu.com/s/11r_uOUV0ToMNXQtxahb0yg?pwd=7qtp)

'''Setup - import the data'''
data_dir = r"D:\DeepLearning\data\monkeypox_recognition"
data_dir = Path(data_dir)

4. Inspect the data

'''Setup - inspect the data'''
image_count = len(list(data_dir.glob('*/*.jpg')))
print("Total number of images:", image_count)
Monkeypox = list(data_dir.glob('Monkeypox/*.jpg'))
image = Image.open(str(Monkeypox[0]))  # Image comes from `from PIL import Image` above
# inspect the image object's attributes
print(image.format, image.size, image.mode)
plt.imshow(image)
plt.show()

Output:

Total number of images: 2142
JPEG (224, 224) RGB

[Figure: the first sample image from the Monkeypox class]

5. Load the data

'''Data preprocessing - load the data'''
batch_size = 32
img_height = 224
img_width = 224
"""
For a detailed introduction to image_dataset_from_directory(), see:
https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
# class_names holds the dataset labels, ordered alphabetically by directory name.
class_names = train_ds.class_names
print(class_names)

Output:

Total number of images: 2142
JPEG (224, 224) RGB
Found 2142 files belonging to 2 classes.
Using 1714 files for training.
Found 2142 files belonging to 2 classes.
Using 428 files for validation.
['Monkeypox', 'Others']

6. Visualize the data

'''Data preprocessing - visualize the data'''
plt.figure(figsize=(25, 20))
for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 4, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]], fontsize=40)
        plt.axis("off")
# show the figure
plt.show()

[Figure: a 5x4 grid of sample training images with their class labels]

7. Re-check the data

'''Data preprocessing - re-check the data'''
# image_batch is a tensor of shape (32, 224, 224, 3): a batch of 32 images of 224x224x3 (the last dimension is the RGB color channels).
# labels_batch is a tensor of shape (32,): the labels for those 32 images.
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

Output:

(32, 224, 224, 3)
(32,)

8. Configure the dataset

My machine has no GPU acceleration, so these steps brought little speedup here.

'''Data preprocessing - configure the dataset'''
AUTOTUNE = tf.data.AUTOTUNE
# shuffle(): shuffles the data; for details see https://zhuanlan.zhihu.com/p/42417456
# prefetch(): prefetches data, overlapping data preparation with training
# cache(): caches the dataset in memory to speed up later epochs
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

9. Build the CNN model

'''Build the CNN'''
num_classes = 2
"""
If you are unsure how convolution kernel arithmetic works, see:
https://blog.csdn.net/qq_38251616/article/details/114278995
layers.Dropout(0.4) helps prevent overfitting and improves the model's generalization.
In the previous flower-recognition article, the large gap between training and validation accuracy was caused by overfitting.
For more on the Dropout layer, see: https://mtyjkh.blog.csdn.net/article/details/115826689
"""
model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1. / 255, input_shape=(img_height, img_width, 3)),

    layers.Conv2D(16, (3, 3), activation='relu'),  # conv layer 1, 3x3 kernels
    layers.AveragePooling2D((2, 2)),  # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # conv layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),  # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.3),
    layers.Conv2D(64, (3, 3), activation='relu'),  # conv layer 3, 3x3 kernels
    layers.Dropout(0.3),

    layers.Flatten(),  # flattens the conv output to feed the fully connected layers
    layers.Dense(128, activation='relu'),  # fully connected layer for further feature extraction
    layers.Dense(num_classes)  # output layer, produces the logits
])

model.summary()  # print the network structure

The network structure:

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 rescaling (Rescaling)       (None, 224, 224, 3)       0         
                                                                 
 conv2d (Conv2D)             (None, 222, 222, 16)      448       
                                                                 
 average_pooling2d (AverageP  (None, 111, 111, 16)     0         
 ooling2D)                                                       
                                                                 
 conv2d_1 (Conv2D)           (None, 109, 109, 32)      4640      
                                                                 
 average_pooling2d_1 (Averag  (None, 54, 54, 32)       0         
 ePooling2D)                                                     
                                                                 
 dropout (Dropout)           (None, 54, 54, 32)        0         
                                                                 
 conv2d_2 (Conv2D)           (None, 52, 52, 64)        18496     
                                                                 
 dropout_1 (Dropout)         (None, 52, 52, 64)        0         
                                                                 
 flatten (Flatten)           (None, 173056)            0         
                                                                 
 dense (Dense)               (None, 128)               22151296  
                                                                 
 dense_1 (Dense)             (None, 2)                 258       
                                                                 
=================================================================
Total params: 22,175,138
Trainable params: 22,175,138
Non-trainable params: 0

10. Compile the model

'''Compile the model'''
# set the optimizer
opt = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

11. Train the model

'''Train the model'''
epochs = 50
# save only the best model weights during training (this produces the log below)
checkpointer = ModelCheckpoint('best_model.h5', monitor='val_accuracy',
                               verbose=1, save_best_only=True,
                               save_weights_only=True)
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs,
    callbacks=[checkpointer]
)

The training log:

Epoch 1/50
54/54 [==============================] - ETA: 0s - loss: 0.7173 - accuracy: 0.5508
Epoch 1: val_accuracy improved from -inf to 0.62850, saving model to best_model.h5
54/54 [==============================] - 16s 284ms/step - loss: 0.7173 - accuracy: 0.5508 - val_loss: 0.6585 - val_accuracy: 0.6285
Epoch 2/50
54/54 [==============================] - ETA: 0s - loss: 0.6509 - accuracy: 0.6272
Epoch 2: val_accuracy improved from 0.62850 to 0.66822, saving model to best_model.h5
54/54 [==============================] - 16s 288ms/step - loss: 0.6509 - accuracy: 0.6272 - val_loss: 0.6203 - val_accuracy: 0.6682
Epoch 3/50
54/54 [==============================] - ETA: 0s - loss: 0.6089 - accuracy: 0.6628
Epoch 3: val_accuracy did not improve from 0.66822
54/54 [==============================] - 16s 295ms/step - loss: 0.6089 - accuracy: 0.6628 - val_loss: 0.6063 - val_accuracy: 0.6355
Epoch 4/50
54/54 [==============================] - ETA: 0s - loss: 0.5755 - accuracy: 0.7095
Epoch 4: val_accuracy improved from 0.66822 to 0.69626, saving model to best_model.h5
54/54 [==============================] - 18s 329ms/step - loss: 0.5755 - accuracy: 0.7095 - val_loss: 0.5675 - val_accuracy: 0.6963
Epoch 5/50
54/54 [==============================] - ETA: 0s - loss: 0.5384 - accuracy: 0.7252
Epoch 5: val_accuracy improved from 0.69626 to 0.72664, saving model to best_model.h5
54/54 [==============================] - 20s 372ms/step - loss: 0.5384 - accuracy: 0.7252 - val_loss: 0.5110 - val_accuracy: 0.7266
Epoch 6/50
54/54 [==============================] - ETA: 0s - loss: 0.4729 - accuracy: 0.7835
Epoch 6: val_accuracy improved from 0.72664 to 0.77804, saving model to best_model.h5
54/54 [==============================] - 20s 369ms/step - loss: 0.4729 - accuracy: 0.7835 - val_loss: 0.4879 - val_accuracy: 0.7780
Epoch 7/50
54/54 [==============================] - ETA: 0s - loss: 0.4367 - accuracy: 0.8069
Epoch 7: val_accuracy improved from 0.77804 to 0.79439, saving model to best_model.h5
54/54 [==============================] - 21s 380ms/step - loss: 0.4367 - accuracy: 0.8069 - val_loss: 0.4391 - val_accuracy: 0.7944
Epoch 8/50
54/54 [==============================] - ETA: 0s - loss: 0.4216 - accuracy: 0.8051
Epoch 8: val_accuracy did not improve from 0.79439
54/54 [==============================] - 19s 360ms/step - loss: 0.4216 - accuracy: 0.8051 - val_loss: 0.4387 - val_accuracy: 0.7921
Epoch 9/50
54/54 [==============================] - ETA: 0s - loss: 0.3928 - accuracy: 0.8337
Epoch 9: val_accuracy did not improve from 0.79439
54/54 [==============================] - 19s 361ms/step - loss: 0.3928 - accuracy: 0.8337 - val_loss: 0.5128 - val_accuracy: 0.7804
Epoch 10/50
54/54 [==============================] - ETA: 0s - loss: 0.3740 - accuracy: 0.8489
Epoch 10: val_accuracy improved from 0.79439 to 0.81776, saving model to best_model.h5
54/54 [==============================] - 21s 389ms/step - loss: 0.3740 - accuracy: 0.8489 - val_loss: 0.4257 - val_accuracy: 0.8178
Epoch 11/50
54/54 [==============================] - ETA: 0s - loss: 0.3290 - accuracy: 0.8623
Epoch 11: val_accuracy improved from 0.81776 to 0.83645, saving model to best_model.h5
54/54 [==============================] - 20s 368ms/step - loss: 0.3290 - accuracy: 0.8623 - val_loss: 0.3938 - val_accuracy: 0.8364
Epoch 12/50
54/54 [==============================] - ETA: 0s - loss: 0.3396 - accuracy: 0.8565
Epoch 12: val_accuracy did not improve from 0.83645
54/54 [==============================] - 19s 358ms/step - loss: 0.3396 - accuracy: 0.8565 - val_loss: 0.4382 - val_accuracy: 0.8271
Epoch 13/50
54/54 [==============================] - ETA: 0s - loss: 0.2985 - accuracy: 0.8821
Epoch 13: val_accuracy did not improve from 0.83645
54/54 [==============================] - 19s 359ms/step - loss: 0.2985 - accuracy: 0.8821 - val_loss: 0.4141 - val_accuracy: 0.8107
Epoch 14/50
54/54 [==============================] - ETA: 0s - loss: 0.2911 - accuracy: 0.8874
Epoch 14: val_accuracy did not improve from 0.83645
54/54 [==============================] - 20s 361ms/step - loss: 0.2911 - accuracy: 0.8874 - val_loss: 0.4168 - val_accuracy: 0.8318
Epoch 15/50
54/54 [==============================] - ETA: 0s - loss: 0.2607 - accuracy: 0.8979
Epoch 15: val_accuracy did not improve from 0.83645
54/54 [==============================] - 19s 361ms/step - loss: 0.2607 - accuracy: 0.8979 - val_loss: 0.4275 - val_accuracy: 0.8341
Epoch 16/50
54/54 [==============================] - ETA: 0s - loss: 0.2791 - accuracy: 0.8909
Epoch 16: val_accuracy improved from 0.83645 to 0.85047, saving model to best_model.h5
54/54 [==============================] - 20s 364ms/step - loss: 0.2791 - accuracy: 0.8909 - val_loss: 0.3908 - val_accuracy: 0.8505
Epoch 17/50
54/54 [==============================] - ETA: 0s - loss: 0.2478 - accuracy: 0.9061
Epoch 17: val_accuracy improved from 0.85047 to 0.85748, saving model to best_model.h5
54/54 [==============================] - 19s 360ms/step - loss: 0.2478 - accuracy: 0.9061 - val_loss: 0.3935 - val_accuracy: 0.8575
Epoch 18/50
54/54 [==============================] - ETA: 0s - loss: 0.2200 - accuracy: 0.9172
Epoch 18: val_accuracy did not improve from 0.85748
54/54 [==============================] - 19s 358ms/step - loss: 0.2200 - accuracy: 0.9172 - val_loss: 0.3799 - val_accuracy: 0.8505
Epoch 19/50
54/54 [==============================] - ETA: 0s - loss: 0.2385 - accuracy: 0.9008
Epoch 19: val_accuracy did not improve from 0.85748
54/54 [==============================] - 21s 392ms/step - loss: 0.2385 - accuracy: 0.9008 - val_loss: 0.3905 - val_accuracy: 0.8505
Epoch 20/50
54/54 [==============================] - ETA: 0s - loss: 0.2155 - accuracy: 0.9201
Epoch 20: val_accuracy did not improve from 0.85748
54/54 [==============================] - 20s 376ms/step - loss: 0.2155 - accuracy: 0.9201 - val_loss: 0.4216 - val_accuracy: 0.8458
Epoch 21/50
54/54 [==============================] - ETA: 0s - loss: 0.2216 - accuracy: 0.9125
Epoch 21: val_accuracy improved from 0.85748 to 0.86215, saving model to best_model.h5
54/54 [==============================] - 21s 388ms/step - loss: 0.2216 - accuracy: 0.9125 - val_loss: 0.3887 - val_accuracy: 0.8621
Epoch 22/50
54/54 [==============================] - ETA: 0s - loss: 0.1848 - accuracy: 0.9317
Epoch 22: val_accuracy improved from 0.86215 to 0.86449, saving model to best_model.h5
54/54 [==============================] - 21s 385ms/step - loss: 0.1848 - accuracy: 0.9317 - val_loss: 0.3889 - val_accuracy: 0.8645
Epoch 23/50
54/54 [==============================] - ETA: 0s - loss: 0.1777 - accuracy: 0.9329
Epoch 23: val_accuracy improved from 0.86449 to 0.86916, saving model to best_model.h5
54/54 [==============================] - 21s 399ms/step - loss: 0.1777 - accuracy: 0.9329 - val_loss: 0.4060 - val_accuracy: 0.8692
Epoch 24/50
54/54 [==============================] - ETA: 0s - loss: 0.1756 - accuracy: 0.9347
Epoch 24: val_accuracy did not improve from 0.86916
54/54 [==============================] - 22s 407ms/step - loss: 0.1756 - accuracy: 0.9347 - val_loss: 0.3895 - val_accuracy: 0.8621
Epoch 25/50
54/54 [==============================] - ETA: 0s - loss: 0.1831 - accuracy: 0.9277
Epoch 25: val_accuracy did not improve from 0.86916
54/54 [==============================] - 21s 398ms/step - loss: 0.1831 - accuracy: 0.9277 - val_loss: 0.4328 - val_accuracy: 0.8458
Epoch 26/50
54/54 [==============================] - ETA: 0s - loss: 0.1698 - accuracy: 0.9376
Epoch 26: val_accuracy did not improve from 0.86916
54/54 [==============================] - 22s 403ms/step - loss: 0.1698 - accuracy: 0.9376 - val_loss: 0.3958 - val_accuracy: 0.8575
Epoch 27/50
54/54 [==============================] - ETA: 0s - loss: 0.1492 - accuracy: 0.9475
Epoch 27: val_accuracy did not improve from 0.86916
54/54 [==============================] - 22s 410ms/step - loss: 0.1492 - accuracy: 0.9475 - val_loss: 0.3963 - val_accuracy: 0.8551
Epoch 28/50
54/54 [==============================] - ETA: 0s - loss: 0.1668 - accuracy: 0.9335
Epoch 28: val_accuracy did not improve from 0.86916
54/54 [==============================] - 22s 416ms/step - loss: 0.1668 - accuracy: 0.9335 - val_loss: 0.4133 - val_accuracy: 0.8575
Epoch 29/50
54/54 [==============================] - ETA: 0s - loss: 0.1313 - accuracy: 0.9597
Epoch 29: val_accuracy improved from 0.86916 to 0.87617, saving model to best_model.h5
54/54 [==============================] - 23s 426ms/step - loss: 0.1313 - accuracy: 0.9597 - val_loss: 0.4363 - val_accuracy: 0.8762
Epoch 30/50
54/54 [==============================] - ETA: 0s - loss: 0.1333 - accuracy: 0.9487
Epoch 30: val_accuracy did not improve from 0.87617
54/54 [==============================] - 23s 431ms/step - loss: 0.1333 - accuracy: 0.9487 - val_loss: 0.4202 - val_accuracy: 0.8692
Epoch 31/50
54/54 [==============================] - ETA: 0s - loss: 0.1202 - accuracy: 0.9568
Epoch 31: val_accuracy did not improve from 0.87617
54/54 [==============================] - 23s 419ms/step - loss: 0.1202 - accuracy: 0.9568 - val_loss: 0.4313 - val_accuracy: 0.8668
Epoch 32/50
54/54 [==============================] - ETA: 0s - loss: 0.1319 - accuracy: 0.9457
Epoch 32: val_accuracy improved from 0.87617 to 0.88084, saving model to best_model.h5
54/54 [==============================] - 24s 446ms/step - loss: 0.1319 - accuracy: 0.9457 - val_loss: 0.4308 - val_accuracy: 0.8808
Epoch 33/50
54/54 [==============================] - ETA: 0s - loss: 0.1106 - accuracy: 0.9603
Epoch 33: val_accuracy did not improve from 0.88084
54/54 [==============================] - 24s 448ms/step - loss: 0.1106 - accuracy: 0.9603 - val_loss: 0.4283 - val_accuracy: 0.8715
Epoch 34/50
54/54 [==============================] - ETA: 0s - loss: 0.1268 - accuracy: 0.9522
Epoch 34: val_accuracy did not improve from 0.88084
54/54 [==============================] - 24s 445ms/step - loss: 0.1268 - accuracy: 0.9522 - val_loss: 0.4150 - val_accuracy: 0.8738
Epoch 35/50
54/54 [==============================] - ETA: 0s - loss: 0.1112 - accuracy: 0.9574
Epoch 35: val_accuracy did not improve from 0.88084
54/54 [==============================] - 23s 431ms/step - loss: 0.1112 - accuracy: 0.9574 - val_loss: 0.4668 - val_accuracy: 0.8481
Epoch 36/50
54/54 [==============================] - ETA: 0s - loss: 0.1367 - accuracy: 0.9457
Epoch 36: val_accuracy did not improve from 0.88084
54/54 [==============================] - 23s 416ms/step - loss: 0.1367 - accuracy: 0.9457 - val_loss: 0.4071 - val_accuracy: 0.8715
Epoch 37/50
54/54 [==============================] - ETA: 0s - loss: 0.0974 - accuracy: 0.9621
Epoch 37: val_accuracy did not improve from 0.88084
54/54 [==============================] - 23s 424ms/step - loss: 0.0974 - accuracy: 0.9621 - val_loss: 0.4489 - val_accuracy: 0.8715
Epoch 38/50
54/54 [==============================] - ETA: 0s - loss: 0.0980 - accuracy: 0.9685
Epoch 38: val_accuracy did not improve from 0.88084
54/54 [==============================] - 23s 426ms/step - loss: 0.0980 - accuracy: 0.9685 - val_loss: 0.4423 - val_accuracy: 0.8715
Epoch 39/50
54/54 [==============================] - ETA: 0s - loss: 0.0851 - accuracy: 0.9714
Epoch 39: val_accuracy improved from 0.88084 to 0.88551, saving model to best_model.h5
54/54 [==============================] - 24s 440ms/step - loss: 0.0851 - accuracy: 0.9714 - val_loss: 0.4452 - val_accuracy: 0.8855
Epoch 40/50
54/54 [==============================] - ETA: 0s - loss: 0.0910 - accuracy: 0.9667
Epoch 40: val_accuracy did not improve from 0.88551
54/54 [==============================] - 24s 447ms/step - loss: 0.0910 - accuracy: 0.9667 - val_loss: 0.5314 - val_accuracy: 0.8551
Epoch 41/50
54/54 [==============================] - ETA: 0s - loss: 0.0821 - accuracy: 0.9778
Epoch 41: val_accuracy did not improve from 0.88551
54/54 [==============================] - 25s 460ms/step - loss: 0.0821 - accuracy: 0.9778 - val_loss: 0.4437 - val_accuracy: 0.8762
Epoch 42/50
54/54 [==============================] - ETA: 0s - loss: 0.0673 - accuracy: 0.9778
Epoch 42: val_accuracy did not improve from 0.88551
54/54 [==============================] - 23s 418ms/step - loss: 0.0673 - accuracy: 0.9778 - val_loss: 0.4765 - val_accuracy: 0.8785
Epoch 43/50
54/54 [==============================] - ETA: 0s - loss: 0.0737 - accuracy: 0.9772
Epoch 43: val_accuracy did not improve from 0.88551
54/54 [==============================] - 24s 436ms/step - loss: 0.0737 - accuracy: 0.9772 - val_loss: 0.4521 - val_accuracy: 0.8692
Epoch 44/50
54/54 [==============================] - ETA: 0s - loss: 0.0689 - accuracy: 0.9813
Epoch 44: val_accuracy did not improve from 0.88551
54/54 [==============================] - 24s 436ms/step - loss: 0.0689 - accuracy: 0.9813 - val_loss: 0.4629 - val_accuracy: 0.8785
Epoch 45/50
54/54 [==============================] - ETA: 0s - loss: 0.0679 - accuracy: 0.9761
Epoch 45: val_accuracy did not improve from 0.88551
54/54 [==============================] - 24s 436ms/step - loss: 0.0679 - accuracy: 0.9761 - val_loss: 0.4785 - val_accuracy: 0.8762
Epoch 46/50
54/54 [==============================] - ETA: 0s - loss: 0.0618 - accuracy: 0.9796
Epoch 46: val_accuracy did not improve from 0.88551
54/54 [==============================] - 23s 417ms/step - loss: 0.0618 - accuracy: 0.9796 - val_loss: 0.5303 - val_accuracy: 0.8645
Epoch 47/50
54/54 [==============================] - ETA: 0s - loss: 0.0638 - accuracy: 0.9813
Epoch 47: val_accuracy did not improve from 0.88551
54/54 [==============================] - 23s 421ms/step - loss: 0.0638 - accuracy: 0.9813 - val_loss: 0.5569 - val_accuracy: 0.8621
Epoch 48/50
54/54 [==============================] - ETA: 0s - loss: 0.0785 - accuracy: 0.9767
Epoch 48: val_accuracy did not improve from 0.88551
54/54 [==============================] - 23s 418ms/step - loss: 0.0785 - accuracy: 0.9767 - val_loss: 0.5126 - val_accuracy: 0.8715
Epoch 49/50
54/54 [==============================] - ETA: 0s - loss: 0.0485 - accuracy: 0.9895
Epoch 49: val_accuracy did not improve from 0.88551
54/54 [==============================] - 23s 424ms/step - loss: 0.0485 - accuracy: 0.9895 - val_loss: 0.5061 - val_accuracy: 0.8738
Epoch 50/50
54/54 [==============================] - ETA: 0s - loss: 0.0607 - accuracy: 0.9790
Epoch 50: val_accuracy did not improve from 0.88551
54/54 [==============================] - 23s 428ms/step - loss: 0.0607 - accuracy: 0.9790 - val_loss: 0.5467 - val_accuracy: 0.8692

12. Model evaluation

'''Model evaluation'''
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

Output:

[Figure: training and validation accuracy/loss curves]

13. Predict on a specified image

'''Predict on a specified image'''
# load the best-performing weights
model.load_weights('best_model.h5')
# img = Image.open("D:/DeepLearning/data/monkeypox_recognition/Others/NM01_01_00.jpg")  # pick the image you want to predict
img = Image.open(r"D:\DeepLearning\data\monkeypox_recognition\Monkeypox\M24_01_04.jpg")  # pick the image you want to predict
print("type(img):", type(img), "img.size:", img.size)
# preprocess the data
img_array0 = np.asarray(img)  # img.size is (224, 224); tf.image.resize needs a 3-D or 4-D tensor, and np.asarray gives img_array0.shape == (224, 224, 3)
print("img_array0.shape:", img_array0.shape)
image = tf.image.resize(img_array0, [img_height, img_width])
image1 = tf.keras.utils.array_to_img(image)
print("image:", type(image), image.shape)
print("image1:", type(image1), image1.size)
plt.imshow(image1)
plt.show()
img_array = tf.expand_dims(image, 0)  # add the batch dimension
print("img_array:", type(img_array), img_array.shape)
predictions = model.predict(img_array)  # use your trained model here
print("Predicted class:", class_names[np.argmax(predictions)])
type(img): <class 'PIL.JpegImagePlugin.JpegImageFile'> img.size: (224, 224)
img_array0.shape: (224, 224, 3)
image: <class 'tensorflow.python.framework.ops.EagerTensor'> (224, 224, 3)
image1: <class 'PIL.Image.Image'> (224, 224)
img_array: <class 'tensorflow.python.framework.ops.EagerTensor'> (1, 224, 224, 3)
1/1 [==============================] - 0s 91ms/step
Predicted class: Monkeypox

III. Knowledge Points in Detail

1. ModelCheckpoint in detail

Function signature

keras.callbacks.ModelCheckpoint(filepath, 
								monitor='val_loss', 
								verbose=0, 
								save_best_only=False, 
								save_weights_only=False, 
								mode='auto', 
								period=1)

Purpose

This callback saves the model to filepath at fixed intervals, by default once per epoch.

Parameters

  • filepath: string, the path where the model is saved. filepath may be a format string whose placeholders are filled with the epoch number and the keys of the logs dict passed to on_epoch_end.
    For example:
    filepath = "weights_{epoch:03d}-{val_loss:.4f}.h5"
    generates a separate file per checkpoint, named after the epoch and the validation loss.
  • monitor: the quantity to monitor, typically val_acc, val_loss, acc, or loss.
  • verbose: verbosity mode, 0 or 1. 1 prints a message each time the model is saved at the end of an epoch; the default 0 suppresses it. The message looks like:
    Epoch 00001: val_acc improved from -inf to 0.49240, saving model to /xxx/checkpoint/model_001-0.3902.h5
  • save_best_only: when True, only the model that performs best on the validation set is saved.
  • mode: one of 'auto', 'min', 'max'. With save_best_only=True it decides how the best model is judged: when monitoring val_acc the mode should be max, when monitoring val_loss it should be min. In auto mode the direction is inferred from the name of the monitored quantity.
  • save_weights_only: if True, only the model weights are saved; otherwise the whole model (architecture, configuration, and so on) is saved.
  • period: the number of epochs between checkpoints.

Tips

  • When filepath is not a format string, ModelCheckpoint keeps a single file and every save overwrites the previous one; with save_best_only=False the surviving file is then the last epoch's model, and with True it is the best epoch's. With a format string, multiple files are written in save order, as the sketch below shows.
  • save_best_only controls whether the saved model keeps the best parameters.
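
A minimal sketch of the formatted-filepath behavior described above, assuming model, train_ds, and val_ds are defined as in Part II (the file names are illustrative):

from tensorflow.keras.callbacks import ModelCheckpoint

# A format-string filepath writes one file per qualifying epoch,
# e.g. weights_003-0.4521.h5, instead of overwriting a single file.
multi_file_ckpt = ModelCheckpoint(
    filepath="weights_{epoch:03d}-{val_loss:.4f}.h5",
    monitor='val_loss',
    mode='min',
    save_best_only=True,      # write only epochs that improve val_loss
    save_weights_only=True,
    verbose=1)

model.fit(train_ds, validation_data=val_ds, epochs=10,
          callbacks=[multi_file_ckpt])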

2. EarlyStopping in detail

Function signature

keras.callbacks.EarlyStopping(
    monitor="val_loss",
    min_delta=0,
    patience=0,
    verbose=0,
    mode="auto",
    baseline=None,
    restore_best_weights=False,
)

Purpose

This callback stops training early when the monitored quantity has not improved for the specified number of epochs.

Parameters

  • monitor: the quantity to monitor, e.g. 'acc', 'val_acc', 'loss', 'val_loss'. Normally, if you have a validation set, use 'val_acc' or 'val_loss'.
  • min_delta: the threshold below which a change does not count as an improvement (whether improvement means increasing or decreasing depends on the monitored quantity). Its size depends on monitor and reflects how much tolerance you allow.
  • patience: the number of epochs with no improvement (e.g. the loss not decreasing over the last patience epochs) after which training is stopped. patience is directly related to the learning rate: with a fixed learning rate, run a few trial epochs first to see over how many epochs the metric fluctuates, and set patience slightly larger than that number; with a varying learning rate, set it slightly smaller than the largest fluctuation span.
  • verbose: verbosity mode.
  • mode: one of 'auto', 'min', 'max'. In min mode, training stops when the monitored quantity stops decreasing; in max mode, when it stops increasing. For example, when monitoring 'acc', set it to 'max'.
  • baseline: a baseline value for the monitored quantity; if the model shows no improvement over the baseline during training, training is stopped.
  • restore_best_weights: if True, the final model weights are those of the epoch with the best monitored value over the whole run; otherwise the last epoch's weights are kept.

Tips

  • restore_best_weights controls whether, after training stops early, the model used by the code that follows keeps the best parameters, as in the sketch below.
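
A minimal sketch of restore_best_weights, again assuming model, train_ds, and val_ds from Part II (the settings are illustrative):

from tensorflow.keras.callbacks import EarlyStopping

# Stop when val_loss has not improved for 5 epochs, and roll the model
# back to the weights of its best epoch before continuing.
early_stop = EarlyStopping(monitor='val_loss',
                           patience=5,
                           mode='min',
                           restore_best_weights=True,
                           verbose=1)

history = model.fit(train_ds, validation_data=val_ds, epochs=50,
                    callbacks=[early_stop])
# Any evaluation below now uses the best-epoch weights, not the last-epoch ones.
loss, acc = model.evaluate(val_ds)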

3. A combined ModelCheckpoint and EarlyStopping example

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

earlystopper = EarlyStopping(monitor='loss',
                             patience=1,
                             verbose=1,
                             mode='min')
checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',
                               verbose=0,
                               save_best_only=True,
                               save_weights_only=True,
                               mode='max')
train_model = model.fit(train_ds,
                        epochs=epochs,
                        validation_data=val_ds,
                        callbacks=[earlystopper, checkpointer])  # <- note the callbacks argument

4. model.fit() in detail

Function signature

fit( x=None,  # input features
     y=None,  # input labels
     batch_size=None,  # integer, number of samples per gradient update (the batch size); defaults to 32 if unspecified
     epochs=1,  # number of training epochs
     verbose=1,  # integer controlling how training logs are displayed
     callbacks=None,  # list of callbacks invoked at the appropriate points during training; see the callbacks documentation
     validation_split=0.0,  # float between 0 and 1, fraction of the training data used for validation; the model holds this portion out of training and evaluates the loss and any other metrics on it at the end of each epoch
     validation_data=None,  # overrides validation_split (only one of the two can be used); a tuple (x_val, y_val) used as validation data
     shuffle=True,  # boolean, whether to shuffle the data before each epoch
     class_weight=None,
     sample_weight=None,
     initial_epoch=0,
     steps_per_epoch=None,  # number of steps (batches) per epoch; when training with input tensors such as TensorFlow data tensors, the default None means dataset size / batch size
     validation_steps=None,  # total number of steps on the validation set; only relevant when steps_per_epoch is specified
     validation_freq=1,  # how often to run validation; 1 means every epoch
)

Description

Trains the model on the training data for a given number of epochs and returns the recorded loss and metric values.

Parameters

  • x: the training set's input features.
  • y: the training set's labels.
  • batch_size: integer, the number of samples per batch for gradient descent; one gradient step is computed per batch, moving the objective one optimization step.
  • epochs: the epoch at which training stops. When initial_epoch is not set, this is the total number of training epochs; otherwise the total is epochs - initial_epoch.
  • verbose: logging mode; 0 prints nothing to stdout, 1 shows a progress bar, 2 prints one line per epoch.
  • callbacks: a list of keras.callbacks.Callback objects, invoked at the appropriate points during training; see the callbacks documentation.
  • validation_split: float between 0 and 1, the fraction of the training set used as a validation set. The validation data do not participate in training; the model's metrics (loss, accuracy, etc.) are evaluated on them after each epoch. Note that validation_split is applied before shuffle, so if your data are ordered you must shuffle them manually first, otherwise the validation set may be unrepresentative.
  • validation_data: a tuple (X, y) of validation inputs and labels, used as the designated validation set. This parameter overrides validation_split.
  • shuffle: boolean or string, usually a boolean indicating whether to shuffle the samples each epoch. The string "batch" is a special case for HDF5 data that shuffles within each batch.
  • class_weight: dict mapping classes to weights, used to reweight the loss function during training (training only).
  • sample_weight: numpy array of weights used to reweight the loss function during training (training only). Pass a 1-D vector the same length as the samples for 1:1 sample weighting, or, for temporal data, a matrix of shape (samples, sequence_length) to weight each timestep of each sample; in that case make sure the model is compiled with sample_weight_mode='temporal'.
  • initial_epoch: the epoch at which to start training; useful for resuming a previous run.
  • steps_per_epoch: number of steps (batches) per epoch; when training with input tensors such as TensorFlow data tensors, the default None means dataset size / batch size.
  • validation_steps: total number of steps on the validation set; only relevant when steps_per_epoch is specified.
  • validation_freq: how often to run validation; 1 means every epoch.
    fit returns a History object whose History.history attribute records how the loss and other metrics changed epoch by epoch, including the validation metrics if a validation set was used; a minimal sketch of reading it follows.
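
A minimal sketch of reading the History object; the keys shown assume the model was compiled with metrics=['accuracy'] and trained with a validation set, as in Part II:

import numpy as np

history = model.fit(train_ds, validation_data=val_ds, epochs=3)

print(history.history.keys())
# e.g. dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
best_epoch = int(np.argmax(history.history['val_accuracy'])) + 1
print("Best validation accuracy at epoch", best_epoch)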

5. tf.image.resize in detail

Function signature

tf.image.resize(
    images, size, method=ResizeMethod.BILINEAR, preserve_aspect_ratio=False,
    antialias=False, name=None
)

Parameters

  • images: a 4-D tensor of shape [batch, height, width, channels], or a 3-D tensor of shape [height, width, channels].
  • size: a 1-D int32 tensor of 2 elements: new_height, new_width. The new size for the images.
  • method: an image.ResizeMethod, or an equivalent string. Defaults to bilinear.
  • preserve_aspect_ratio: whether to preserve the aspect ratio. If set, images is resized to fit within size while preserving the aspect ratio of the original image; the image is scaled up if size is larger than its current size. Defaults to False.
  • antialias: whether to use an anti-aliasing filter when downsampling an image.
  • name: a name for this operation (optional).

Raises

  • ValueError: if the shape of images is incompatible with the shape arguments of this function.
  • ValueError: if size has an invalid shape or type.
  • ValueError: if an unsupported resize method is specified.

Returns

  • A 4-D float tensor of shape [batch, new_height, new_width, channels] if images was 4-D, or a 3-D float tensor of shape [new_height, new_width, channels] if it was 3-D.
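
A minimal sketch of the behavior above (the shapes are illustrative):

import tensorflow as tf

img = tf.zeros([300, 400, 3])  # a 3-D image tensor: height 300, width 400

out = tf.image.resize(img, [224, 224])
print(out.shape)  # (224, 224, 3), dtype float32

# preserve_aspect_ratio=True fits the image inside [224, 224] instead:
out2 = tf.image.resize(img, [224, 224], preserve_aspect_ratio=True)
print(out2.shape)  # (168, 224, 3): 300x400 scaled by 224/400

Note that the result is a float tensor even for uint8 input, which is why the prediction code converts it with tf.keras.utils.array_to_img before plotting.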

6. A walkthrough of the image-prediction code

img = Image.open(r"D:\DeepLearning\data\monkeypox_recognition\Monkeypox\M24_01_04.jpg")  

The image is read with Pillow, which returns a PIL.JpegImagePlugin.JpegImageFile. img.size gives only the pixel dimensions (width, height), with no channel dimension, here (224, 224).

img_array0 = np.asarray(img)  # img.size is (224, 224); tf.image.resize needs a 3-D or 4-D tensor, and np.asarray yields img_array0.shape == (224, 224, 3)

tf.image.resize, however, expects images to be a 4-D tensor of shape [batch, height, width, channels] or a 3-D tensor of shape [height, width, channels], so the image read by Pillow must first be converted. np.asarray does this, producing a numpy array indexed as (height, width, channel), here (224, 224, 3).

image = tf.image.resize(img_array0, [img_height, img_width])
image1 = tf.keras.utils.array_to_img(image)

To display the resized image with plt.imshow(), it is converted back into a PIL image using tf.keras.utils.array_to_img(image).

img_array = tf.expand_dims(image, 0)

In TensorFlow, tf.expand_dims(input, axis, name=None) adds a dimension of size 1 at the given axis; passing axis=-1 appends one at the end. Here it adds the batch dimension that model.predict expects.
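
A small sketch of the resulting shape changes (the values are illustrative):

import tensorflow as tf

image = tf.zeros([224, 224, 3])
batched = tf.expand_dims(image, 0)    # prepend the batch axis
print(batched.shape)                  # (1, 224, 224, 3)
appended = tf.expand_dims(image, -1)  # append an axis at the end
print(appended.shape)                 # (224, 224, 3, 1)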

print("预测结果为:", class_names[np.argmax(predictions)])

np.argmax(predictions) returns the index of the maximum value in a numpy array; here it is used in its 1-D form. Function signature: numpy.argmax(a, axis=None, out=None).

  1. When axis is omitted, the array is flattened and the index of the overall maximum is returned.
import numpy as np
a = np.array([
    [3, 2, 8, 4],
    [7, 2, 3, 1],
    [3, 9, 2, 4],
    [4, 1, 1, 6]
])
np.argmax(a)

Output:

9
  2. axis = 0: searches down each column and returns the row index of each column's maximum.
import numpy as np
a = np.array([
    [3, 2, 8, 4],
    [7, 2, 3, 1],
    [3, 9, 2, 4],
    [4, 1, 1, 6]
])
a.argmax(0)

Output:

array([1, 2, 0, 3], dtype=int64)
  3. axis = 1: searches across each row and returns the column index of each row's maximum.

import numpy as np
a = np.array([
    [3, 2, 8, 4],
    [7, 2, 3, 1],
    [3, 9, 2, 4],
    [4, 1, 1, 6]
])
a.argmax(1)

Output:

array([2, 0, 1, 3], dtype=int64)

Summary

Through this exercise I learned to build a CNN with the TensorFlow framework for monkeypox recognition. Along the way I found that callbacks can control how the model is saved and can end training early, and that the size of a JPEG read with Pillow reports only width and height, so the image cannot be fed to tf.image.resize directly; converting it to a numpy array first solves this.
