Deep Learning Notes 14: Monkeypox Recognition with TensorFlow

1. My Environment

1. Language: Python 3.9

2. IDE: PyCharm

3. Deep learning framework: TensorFlow 2.10.0

2. GPU Setup

       Skip this step if you are running on CPU.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]  # if there are multiple GPUs, use only the first one
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

3. Importing the Data

import pathlib

data_dir = "./data/"
data_dir = pathlib.Path(data_dir)

image_count = len(list(data_dir.glob('*/*.jpg')))

print("Total number of images:", image_count)
# Total number of images: 2142
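The pattern `'*/*.jpg'` matches one class subdirectory plus a file name, so `image_count` is the total number of JPEGs across all class folders. A minimal, self-contained sketch of the same counting logic on a throwaway directory (the class names and file counts below are illustrative, not the real dataset):

```python
import pathlib
import tempfile

# Build a throwaway directory that mimics the data/ layout:
# one subdirectory per class, each holding .jpg files.
root = pathlib.Path(tempfile.mkdtemp())
for cls, n in [("Monkeypox", 3), ("Others", 2)]:
    (root / cls).mkdir()
    for i in range(n):
        (root / cls / f"img_{i}.jpg").touch()

# '*/*.jpg' matches <class_dir>/<file>.jpg, exactly one level deep
image_count = len(list(root.glob('*/*.jpg')))
print(image_count)  # 5
```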

4. Data Preprocessing

batch_size = 32
img_height = 224
img_width = 224

"""
For a detailed introduction to image_dataset_from_directory(), see: https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
class_names = train_ds.class_names
print(class_names)

Result:

['Monkeypox', 'Others']

5. Visualizing the Images

import matplotlib.pyplot as plt

plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        
        plt.axis("off")
plt.show()

Result: (a grid of 20 sample training images, each titled with its class label)

Check the data again:

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

Result:

(32, 224, 224, 3)
(32,)

6. Configuring the Dataset

  • shuffle(): shuffles the data; for a detailed introduction see: https://zhuanlan.zhihu.com/p/42417456
  • prefetch(): prefetches data to speed up the input pipeline
  • cache(): caches the dataset in memory to speed up training

AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

7. Building the CNN

        The input to a convolutional neural network (CNN) is a tensor of shape (image_height, image_width, color_channels), carrying the image's height, width, and color information; the batch size is not included. color_channels is (R, G, B), one entry per RGB color channel. In this example the CNN input shape is (224, 224, 3), which we pass to the first layer via the input_shape argument.
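Since every Conv2D in the model below uses 'valid' padding (output = input − kernel + 1) and every 2×2 pooling halves the spatial size (floor division), the feature-map sizes can be traced by hand. A small sketch of that arithmetic (the helper names are my own):

```python
def conv_out(n, k):
    # 'valid' convolution, stride 1: output size = n - k + 1
    return n - k + 1

def pool_out(n, p):
    # non-overlapping p x p pooling: output size = floor(n / p)
    return n // p

n = 224
n = conv_out(n, 3)   # 222 after the first 3x3 conv
n = pool_out(n, 2)   # 111 after the first 2x2 pooling
n = conv_out(n, 3)   # 109 after the second conv
n = pool_out(n, 2)   # 54  after the second pooling
n = conv_out(n, 3)   # 52  after the third conv
print(n)  # 52, matching the 52x52 feature maps in model.summary()
```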

num_classes = 2

"""
If you are unsure how convolution kernel arithmetic works, see: https://blog.csdn.net/qq_38251616/article/details/114278995

layers.Dropout(0.3) helps prevent overfitting and improves the model's generalization.
In the previous flower-recognition post, the large gap between training and validation accuracy was caused by exactly this kind of overfitting.

For more on Dropout layers, see: https://mtyjkh.blog.csdn.net/article/details/115826689
"""

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),

    layers.Conv2D(16, (3, 3), activation='relu'),  # conv layer 1, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # conv layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.3),
    layers.Conv2D(64, (3, 3), activation='relu'),  # conv layer 3, 3x3 kernels
    layers.Dropout(0.3),

    layers.Flatten(),                       # flatten: connects the conv stack to the dense layers
    layers.Dense(128, activation='relu'),   # fully connected layer for further feature extraction
    layers.Dense(num_classes)               # output layer (raw logits)
])

model.summary()  # print the model architecture

Result:

_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 rescaling (Rescaling)       (None, 224, 224, 3)       0

 conv2d (Conv2D)             (None, 222, 222, 16)      448

 average_pooling2d (AverageP  (None, 111, 111, 16)     0
 ooling2D)

 conv2d_1 (Conv2D)           (None, 109, 109, 32)      4640

 average_pooling2d_1 (Averag  (None, 54, 54, 32)       0
 ePooling2D)

 dropout (Dropout)           (None, 54, 54, 32)        0

 conv2d_2 (Conv2D)           (None, 52, 52, 64)        18496

 dropout_1 (Dropout)         (None, 52, 52, 64)        0

 flatten (Flatten)           (None, 173056)            0

 dense (Dense)               (None, 128)               22151296

 dense_1 (Dense)             (None, 2)                 258

=================================================================
Total params: 22,175,138
Trainable params: 22,175,138
Non-trainable params: 0
_________________________________________________________________
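The parameter counts in the summary can be verified by hand: a Conv2D layer has (kh·kw·in_channels + 1)·filters parameters (the +1 is the bias), and a Dense layer has (inputs + 1)·units. A quick sketch reproducing the table:

```python
def conv_params(k, cin, cout):
    # k*k*cin weights per filter, plus one bias per filter
    return (k * k * cin + 1) * cout

def dense_params(n_in, n_out):
    # one weight per input per unit, plus one bias per unit
    return (n_in + 1) * n_out

params = [
    conv_params(3, 3, 16),            # conv2d    -> 448
    conv_params(3, 16, 32),           # conv2d_1  -> 4640
    conv_params(3, 32, 64),           # conv2d_2  -> 18496
    dense_params(52 * 52 * 64, 128),  # dense     -> 22151296
    dense_params(128, 2),             # dense_1   -> 258
]
print(sum(params))  # 22175138, matching "Total params" above
```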

8. Compiling the Model

        Before the model is ready for training, it needs a few more settings, which are added in the compile step:

  • Loss function (loss): measures how well the model performs during training.
  • Optimizer (optimizer): determines how the model is updated based on the data it sees and its loss function.
  • Metrics (metrics): used to monitor the training and testing steps. The example below uses accuracy, the fraction of images that are correctly classified.

# set up the optimizer
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
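Because the output layer has no softmax, the model emits raw logits, and `from_logits=True` tells the loss to apply softmax internally before computing cross-entropy. A pure-Python sketch of what sparse categorical cross-entropy computes for a single sample (the logits and label are made-up numbers):

```python
import math

def sparse_ce_from_logits(logits, label):
    # softmax over the logits, then negative log-likelihood of the true class
    m = max(logits)                           # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    prob = exps[label] / sum(exps)
    return -math.log(prob)

logits = [2.0, -1.0]                      # hypothetical model output for one image
loss = sparse_ce_from_logits(logits, 0)   # true class index 0
print(round(loss, 4))  # ~0.0486: a confident, correct prediction has a small loss
```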

9. Training the Model

epochs = 50

from tensorflow.keras.callbacks import ModelCheckpoint

checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer])

Result:

Epoch 1/50
53/54 [============================>.] - ETA: 0s - loss: 0.6994 - accuracy: 0.5565
Epoch 1: val_accuracy improved from -inf to 0.56542, saving model to best_model.h5
54/54 [==============================] - 5s 45ms/step - loss: 0.6979 - accuracy: 0.5589 - val_loss: 0.6668 - val_accuracy: 0.5654
Epoch 2/50
53/54 [============================>.] - ETA: 0s - loss: 0.6505 - accuracy: 0.6183
Epoch 2: val_accuracy improved from 0.56542 to 0.66121, saving model to best_model.h5
54/54 [==============================] - 2s 35ms/step - loss: 0.6515 - accuracy: 0.6161 - val_loss: 0.6247 - val_accuracy: 0.6612
Epoch 3/50
54/54 [==============================] - ETA: 0s - loss: 0.6200 - accuracy: 0.6645
Epoch 3: val_accuracy improved from 0.66121 to 0.66589, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.6200 - accuracy: 0.6645 - val_loss: 0.6081 - val_accuracy: 0.6659
Epoch 4/50
54/54 [==============================] - ETA: 0s - loss: 0.5851 - accuracy: 0.6890
Epoch 4: val_accuracy improved from 0.66589 to 0.69626, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.5851 - accuracy: 0.6890 - val_loss: 0.6004 - val_accuracy: 0.6963
Epoch 5/50
54/54 [==============================] - ETA: 0s - loss: 0.5654 - accuracy: 0.7141
Epoch 5: val_accuracy improved from 0.69626 to 0.69860, saving model to best_model.h5
54/54 [==============================] - 2s 33ms/step - loss: 0.5654 - accuracy: 0.7141 - val_loss: 0.5868 - val_accuracy: 0.6986
Epoch 6/50
54/54 [==============================] - ETA: 0s - loss: 0.5420 - accuracy: 0.7264
Epoch 6: val_accuracy improved from 0.69860 to 0.72196, saving model to best_model.h5
54/54 [==============================] - 2s 33ms/step - loss: 0.5420 - accuracy: 0.7264 - val_loss: 0.5344 - val_accuracy: 0.7220
Epoch 7/50
54/54 [==============================] - ETA: 0s - loss: 0.5008 - accuracy: 0.7579
Epoch 7: val_accuracy improved from 0.72196 to 0.78037, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.5008 - accuracy: 0.7579 - val_loss: 0.4713 - val_accuracy: 0.7804
Epoch 8/50
54/54 [==============================] - ETA: 0s - loss: 0.4547 - accuracy: 0.7917
Epoch 8: val_accuracy did not improve from 0.78037
54/54 [==============================] - 2s 32ms/step - loss: 0.4547 - accuracy: 0.7917 - val_loss: 0.5349 - val_accuracy: 0.7290
Epoch 9/50
54/54 [==============================] - ETA: 0s - loss: 0.4494 - accuracy: 0.8022
Epoch 9: val_accuracy did not improve from 0.78037
54/54 [==============================] - 2s 32ms/step - loss: 0.4494 - accuracy: 0.8022 - val_loss: 0.6015 - val_accuracy: 0.6963
Epoch 10/50
54/54 [==============================] - ETA: 0s - loss: 0.4314 - accuracy: 0.8022
Epoch 10: val_accuracy improved from 0.78037 to 0.79206, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.4314 - accuracy: 0.8022 - val_loss: 0.4566 - val_accuracy: 0.7921
Epoch 11/50
54/54 [==============================] - ETA: 0s - loss: 0.4082 - accuracy: 0.8238
Epoch 11: val_accuracy improved from 0.79206 to 0.83178, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.4082 - accuracy: 0.8238 - val_loss: 0.4094 - val_accuracy: 0.8318
Epoch 12/50
54/54 [==============================] - ETA: 0s - loss: 0.3629 - accuracy: 0.8600
Epoch 12: val_accuracy did not improve from 0.83178
54/54 [==============================] - 2s 32ms/step - loss: 0.3629 - accuracy: 0.8600 - val_loss: 0.4206 - val_accuracy: 0.8084
Epoch 13/50
54/54 [==============================] - ETA: 0s - loss: 0.3581 - accuracy: 0.8588
Epoch 13: val_accuracy did not improve from 0.83178
54/54 [==============================] - 2s 33ms/step - loss: 0.3581 - accuracy: 0.8588 - val_loss: 0.4083 - val_accuracy: 0.8248
Epoch 14/50
53/54 [============================>.] - ETA: 0s - loss: 0.3344 - accuracy: 0.8662
Epoch 14: val_accuracy improved from 0.83178 to 0.84346, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.3323 - accuracy: 0.8676 - val_loss: 0.4050 - val_accuracy: 0.8435
Epoch 15/50
54/54 [==============================] - ETA: 0s - loss: 0.3439 - accuracy: 0.8582
Epoch 15: val_accuracy did not improve from 0.84346
54/54 [==============================] - 2s 32ms/step - loss: 0.3439 - accuracy: 0.8582 - val_loss: 0.4119 - val_accuracy: 0.8201
Epoch 16/50
54/54 [==============================] - ETA: 0s - loss: 0.3231 - accuracy: 0.8664
Epoch 16: val_accuracy did not improve from 0.84346
54/54 [==============================] - 2s 32ms/step - loss: 0.3231 - accuracy: 0.8664 - val_loss: 0.4034 - val_accuracy: 0.8435
Epoch 17/50
54/54 [==============================] - ETA: 0s - loss: 0.3195 - accuracy: 0.8711
Epoch 17: val_accuracy did not improve from 0.84346
54/54 [==============================] - 2s 32ms/step - loss: 0.3195 - accuracy: 0.8711 - val_loss: 0.3968 - val_accuracy: 0.8248
Epoch 18/50
54/54 [==============================] - ETA: 0s - loss: 0.3078 - accuracy: 0.8798
Epoch 18: val_accuracy did not improve from 0.84346
54/54 [==============================] - 2s 32ms/step - loss: 0.3078 - accuracy: 0.8798 - val_loss: 0.3903 - val_accuracy: 0.8341
Epoch 19/50
54/54 [==============================] - ETA: 0s - loss: 0.2744 - accuracy: 0.8991
Epoch 19: val_accuracy improved from 0.84346 to 0.85047, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.2744 - accuracy: 0.8991 - val_loss: 0.3752 - val_accuracy: 0.8505
Epoch 20/50
54/54 [==============================] - ETA: 0s - loss: 0.2753 - accuracy: 0.8950
Epoch 20: val_accuracy improved from 0.85047 to 0.85280, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.2753 - accuracy: 0.8950 - val_loss: 0.3759 - val_accuracy: 0.8528
Epoch 21/50
54/54 [==============================] - ETA: 0s - loss: 0.2626 - accuracy: 0.8979
Epoch 21: val_accuracy did not improve from 0.85280
54/54 [==============================] - 2s 32ms/step - loss: 0.2626 - accuracy: 0.8979 - val_loss: 0.3768 - val_accuracy: 0.8505
Epoch 22/50
54/54 [==============================] - ETA: 0s - loss: 0.2551 - accuracy: 0.9032
Epoch 22: val_accuracy did not improve from 0.85280
54/54 [==============================] - 2s 33ms/step - loss: 0.2551 - accuracy: 0.9032 - val_loss: 0.3801 - val_accuracy: 0.8435
Epoch 23/50
54/54 [==============================] - ETA: 0s - loss: 0.2499 - accuracy: 0.9061
Epoch 23: val_accuracy did not improve from 0.85280
54/54 [==============================] - 2s 32ms/step - loss: 0.2499 - accuracy: 0.9061 - val_loss: 0.3955 - val_accuracy: 0.8341
Epoch 24/50
54/54 [==============================] - ETA: 0s - loss: 0.2675 - accuracy: 0.8862
Epoch 24: val_accuracy did not improve from 0.85280
54/54 [==============================] - 2s 32ms/step - loss: 0.2675 - accuracy: 0.8862 - val_loss: 0.4271 - val_accuracy: 0.8294
Epoch 25/50
54/54 [==============================] - ETA: 0s - loss: 0.2384 - accuracy: 0.9078
Epoch 25: val_accuracy did not improve from 0.85280
54/54 [==============================] - 2s 33ms/step - loss: 0.2384 - accuracy: 0.9078 - val_loss: 0.3613 - val_accuracy: 0.8458
Epoch 26/50
54/54 [==============================] - ETA: 0s - loss: 0.2310 - accuracy: 0.9113
Epoch 26: val_accuracy improved from 0.85280 to 0.85981, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.2310 - accuracy: 0.9113 - val_loss: 0.3748 - val_accuracy: 0.8598
Epoch 27/50
54/54 [==============================] - ETA: 0s - loss: 0.2243 - accuracy: 0.9148
Epoch 27: val_accuracy did not improve from 0.85981
54/54 [==============================] - 2s 32ms/step - loss: 0.2243 - accuracy: 0.9148 - val_loss: 0.3649 - val_accuracy: 0.8481
Epoch 28/50
54/54 [==============================] - ETA: 0s - loss: 0.2139 - accuracy: 0.9148
Epoch 28: val_accuracy did not improve from 0.85981
54/54 [==============================] - 2s 32ms/step - loss: 0.2139 - accuracy: 0.9148 - val_loss: 0.3757 - val_accuracy: 0.8598
Epoch 29/50
54/54 [==============================] - ETA: 0s - loss: 0.1999 - accuracy: 0.9212
Epoch 29: val_accuracy did not improve from 0.85981
54/54 [==============================] - 2s 32ms/step - loss: 0.1999 - accuracy: 0.9212 - val_loss: 0.4333 - val_accuracy: 0.8388
Epoch 30/50
54/54 [==============================] - ETA: 0s - loss: 0.1960 - accuracy: 0.9265
Epoch 30: val_accuracy did not improve from 0.85981
54/54 [==============================] - 2s 32ms/step - loss: 0.1960 - accuracy: 0.9265 - val_loss: 0.4397 - val_accuracy: 0.8458
Epoch 31/50
54/54 [==============================] - ETA: 0s - loss: 0.1920 - accuracy: 0.9236
Epoch 31: val_accuracy improved from 0.85981 to 0.87617, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.1920 - accuracy: 0.9236 - val_loss: 0.4002 - val_accuracy: 0.8762
Epoch 32/50
53/54 [============================>.] - ETA: 0s - loss: 0.2003 - accuracy: 0.9239
Epoch 32: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 35ms/step - loss: 0.1999 - accuracy: 0.9242 - val_loss: 0.3830 - val_accuracy: 0.8551
Epoch 33/50
53/54 [============================>.] - ETA: 0s - loss: 0.2080 - accuracy: 0.9150
Epoch 33: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 34ms/step - loss: 0.2092 - accuracy: 0.9154 - val_loss: 0.3658 - val_accuracy: 0.8692
Epoch 34/50
54/54 [==============================] - ETA: 0s - loss: 0.1671 - accuracy: 0.9370
Epoch 34: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 33ms/step - loss: 0.1671 - accuracy: 0.9370 - val_loss: 0.4128 - val_accuracy: 0.8505
Epoch 35/50
53/54 [============================>.] - ETA: 0s - loss: 0.1658 - accuracy: 0.9447
Epoch 35: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 33ms/step - loss: 0.1647 - accuracy: 0.9452 - val_loss: 0.3905 - val_accuracy: 0.8715
Epoch 36/50
54/54 [==============================] - ETA: 0s - loss: 0.1532 - accuracy: 0.9446
Epoch 36: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 33ms/step - loss: 0.1532 - accuracy: 0.9446 - val_loss: 0.4140 - val_accuracy: 0.8505
Epoch 37/50
54/54 [==============================] - ETA: 0s - loss: 0.1581 - accuracy: 0.9440
Epoch 37: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 33ms/step - loss: 0.1581 - accuracy: 0.9440 - val_loss: 0.4392 - val_accuracy: 0.8481
Epoch 38/50
54/54 [==============================] - ETA: 0s - loss: 0.1557 - accuracy: 0.9434
Epoch 38: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 33ms/step - loss: 0.1557 - accuracy: 0.9434 - val_loss: 0.4055 - val_accuracy: 0.8645
Epoch 39/50
54/54 [==============================] - ETA: 0s - loss: 0.1456 - accuracy: 0.9440
Epoch 39: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 32ms/step - loss: 0.1456 - accuracy: 0.9440 - val_loss: 0.4077 - val_accuracy: 0.8668
Epoch 40/50
54/54 [==============================] - ETA: 0s - loss: 0.1263 - accuracy: 0.9545
Epoch 40: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 33ms/step - loss: 0.1263 - accuracy: 0.9545 - val_loss: 0.4265 - val_accuracy: 0.8668
Epoch 41/50
54/54 [==============================] - ETA: 0s - loss: 0.1350 - accuracy: 0.9504
Epoch 41: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 32ms/step - loss: 0.1350 - accuracy: 0.9504 - val_loss: 0.4233 - val_accuracy: 0.8621
Epoch 42/50
54/54 [==============================] - ETA: 0s - loss: 0.1284 - accuracy: 0.9557
Epoch 42: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 33ms/step - loss: 0.1284 - accuracy: 0.9557 - val_loss: 0.4191 - val_accuracy: 0.8598
Epoch 43/50
54/54 [==============================] - ETA: 0s - loss: 0.1303 - accuracy: 0.9539
Epoch 43: val_accuracy did not improve from 0.87617
54/54 [==============================] - 2s 32ms/step - loss: 0.1303 - accuracy: 0.9539 - val_loss: 0.4148 - val_accuracy: 0.8715
Epoch 44/50
54/54 [==============================] - ETA: 0s - loss: 0.1182 - accuracy: 0.9621
Epoch 44: val_accuracy improved from 0.87617 to 0.88084, saving model to best_model.h5
54/54 [==============================] - 2s 34ms/step - loss: 0.1182 - accuracy: 0.9621 - val_loss: 0.4213 - val_accuracy: 0.8808
Epoch 45/50
53/54 [============================>.] - ETA: 0s - loss: 0.1203 - accuracy: 0.9643
Epoch 45: val_accuracy did not improve from 0.88084
54/54 [==============================] - 2s 34ms/step - loss: 0.1215 - accuracy: 0.9638 - val_loss: 0.4287 - val_accuracy: 0.8762
Epoch 46/50
53/54 [============================>.] - ETA: 0s - loss: 0.1213 - accuracy: 0.9512
Epoch 46: val_accuracy did not improve from 0.88084
54/54 [==============================] - 2s 33ms/step - loss: 0.1208 - accuracy: 0.9516 - val_loss: 0.4137 - val_accuracy: 0.8738
Epoch 47/50
54/54 [==============================] - ETA: 0s - loss: 0.0948 - accuracy: 0.9714
Epoch 47: val_accuracy did not improve from 0.88084
54/54 [==============================] - 2s 33ms/step - loss: 0.0948 - accuracy: 0.9714 - val_loss: 0.4277 - val_accuracy: 0.8692
Epoch 48/50
54/54 [==============================] - ETA: 0s - loss: 0.0960 - accuracy: 0.9656
Epoch 48: val_accuracy did not improve from 0.88084
54/54 [==============================] - 2s 33ms/step - loss: 0.0960 - accuracy: 0.9656 - val_loss: 0.4341 - val_accuracy: 0.8762
Epoch 49/50
53/54 [============================>.] - ETA: 0s - loss: 0.0926 - accuracy: 0.9685
Epoch 49: val_accuracy did not improve from 0.88084
54/54 [==============================] - 2s 33ms/step - loss: 0.0937 - accuracy: 0.9685 - val_loss: 0.4937 - val_accuracy: 0.8621
Epoch 50/50
53/54 [============================>.] - ETA: 0s - loss: 0.0778 - accuracy: 0.9804
Epoch 50: val_accuracy did not improve from 0.88084
54/54 [==============================] - 2s 33ms/step - loss: 0.0790 - accuracy: 0.9796 - val_loss: 0.4538 - val_accuracy: 0.8715

10. Evaluating the Model

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

11. Predicting a Single Image

import numpy as np
from PIL import Image

# load the weights of the best-performing model
model.load_weights('best_model.h5')

img = Image.open("./data/Others/NM15_02_11.jpg")  # pick the image you want to predict
img = img.resize((img_width, img_height))  # resize the image
img_array = np.array(img)  # convert to a NumPy array
img_array = tf.expand_dims(img_array, 0)  # add a batch dimension
predictions = model.predict(img_array)  # run your trained model

print("Predicted class:", class_names[np.argmax(predictions)])

Result:

Predicted class: Others
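Note that `predictions` holds raw logits, not probabilities; because softmax is monotonic, taking the argmax over the logits picks the same class as taking it over the probabilities. If you also want a confidence score, apply softmax first. A sketch with hypothetical logits:

```python
import math

def softmax(logits):
    # convert logits to probabilities that sum to 1
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

class_names = ['Monkeypox', 'Others']  # mirrors the class list from training
logits = [-0.8, 1.9]                   # hypothetical predictions[0]
probs = softmax(logits)

pred = max(range(len(logits)), key=lambda i: logits[i])
# softmax preserves ordering, so the argmax is identical either way
assert pred == max(range(len(probs)), key=lambda i: probs[i])
print(class_names[pred], round(probs[pred], 3))
```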

12. Summary

   This week I implemented monkeypox recognition with TensorFlow and learned how to improve validation accuracy by adding Dropout layers. I also learned how to keep track of the best model during training, save its weights, and load them back for prediction.
