【tensorflow】T4. Monkeypox Recognition

>- **🍨 This post is a learning-record blog from [🔗 the 365-day deep learning training camp](https://mp.weixin.qq.com/s/0dvHCaOoFnW8SCp3JpzKxg)**
>- **🍖 Original author: [K同学啊](https://mtyjkh.blog.csdn.net/)**

1. Data import and inspection:
import tensorflow as tf
from tensorflow.keras import layers, models
import os, pathlib, PIL
import matplotlib.pyplot as plt
import numpy as np

data_dir = "/kaggle/input/mpdataset"
data_dir = pathlib.Path(data_dir)

image_count = len(list(data_dir.glob("*/*.jpg")))
print(image_count)

monkeypox_1 = list(data_dir.glob("Monkeypox/*.jpg"))
PIL.Image.open(monkeypox_1[0])

out[1]:

2142

(It randomly opened a rather gross one 🤮)

2. Data preprocessing:
batch_size = 32
image_height = 224
image_width = 224

# Note: in newer TF versions this function lives at tf.keras.utils.image_dataset_from_directory
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(image_height, image_width),
    batch_size=batch_size
)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,  # same seed as the training split, so the two subsets don't overlap
    image_size=(image_height, image_width),
    batch_size=batch_size
)

class_names = train_ds.class_names
print(class_names)

out[2]:

Found 2142 files belonging to 2 classes.
Using 1714 files for training.
Found 2142 files belonging to 2 classes.
Using 428 files for validation.

['Monkeypox', 'Others']
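The train/validation counts above follow directly from the split rule: Keras reserves the floor of `validation_split * total` files for validation. A quick sanity check of the arithmetic (plain Python, numbers taken from the log):

```python
total_images = 2142
validation_split = 0.2

# Keras takes floor(validation_split * total) files for validation
num_val = int(total_images * validation_split)
num_train = total_images - num_val

print(num_train, num_val)  # 1714 428, matching the log above
```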

3. Data visualization and inspection:
plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        plt.subplot(2, 10, i + 1)  # 20 images in a 2 × 10 grid
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        plt.axis("off")
        
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

out[3]:

(32, 224, 224, 3)
(32,)
4. Configure the dataset:
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
test_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)  # the validation set, reused as "test_ds" below
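`cache()` keeps decoded images in memory after the first epoch, `shuffle(1000)` draws each sample from a 1000-element buffer, and `prefetch` overlaps data preparation with training. A rough, hypothetical Python sketch of how buffer-based shuffling works (not the actual tf.data implementation):

```python
import random

def shuffle_buffer(items, buffer_size, seed=123):
    # Keep a fixed-size buffer; emit a random element each time a new one arrives.
    rng = random.Random(seed)
    buf = []
    for item in items:
        buf.append(item)
        if len(buf) > buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    while buf:  # drain the buffer at the end of the epoch
        yield buf.pop(rng.randrange(len(buf)))

shuffled = list(shuffle_buffer(range(10), buffer_size=4))
print(sorted(shuffled) == list(range(10)))  # True: shuffling preserves the set of elements
```

Note that with a buffer smaller than the dataset the shuffle is only approximate, which is why a reasonably large `buffer_size` (1000 here) is used.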
5. Build the model:
num_classes = 2

model = models.Sequential([
    layers.Rescaling(1./255, input_shape=(image_height, image_width, 3)),

    layers.Conv2D(16, (3, 3), activation="relu"),  # input_shape is already set by Rescaling
    layers.AveragePooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.AveragePooling2D((2, 2)),
    layers.Dropout(0.3),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.Dropout(0.3),

    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes)  # raw logits; softmax is applied by the loss via from_logits=True
])

model.summary()

Model summary output:
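As a sanity check on the summary, the feature-map sizes can be traced by hand: with the Keras defaults ('valid' padding, stride-1 convolutions, 2×2 pooling), each Conv2D shrinks the side by 2 and each AveragePooling2D halves it (flooring). A small sketch of that arithmetic:

```python
def conv_out(size, kernel=3):
    # 'valid' padding, stride 1
    return size - kernel + 1

def pool_out(size, pool=2):
    # pool_size 2, stride 2, flooring
    return size // pool

side = 224
side = conv_out(side)   # Conv2D(16): 222
side = pool_out(side)   # AveragePooling2D: 111
side = conv_out(side)   # Conv2D(32): 109
side = pool_out(side)   # AveragePooling2D: 54
side = conv_out(side)   # Conv2D(64): 52

flatten_units = side * side * 64
dense_params = flatten_units * 128 + 128  # weights + biases of Dense(128)
print(flatten_units, dense_params)  # 173056 22151296
```

The Flatten→Dense(128) connection dominates the parameter count, which is typical for a small CNN without global pooling.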

6. Compile:
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

model.compile(
    optimizer=opt,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"]
)
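Because the final Dense(num_classes) layer has no activation, the model outputs raw logits, which is why the loss is built with `from_logits=True` (it applies softmax internally, which is numerically more stable). At prediction time, argmax over logits picks the same class as argmax over softmax probabilities; a minimal sketch with made-up logits:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, -1.0]  # hypothetical raw output for the 2 classes
probs = softmax(logits)

same = max(range(2), key=logits.__getitem__) == max(range(2), key=probs.__getitem__)
print(round(sum(probs), 6), same)  # 1.0 True
```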
7. Train the model:
checkpointer = tf.keras.callbacks.ModelCheckpoint(
    "val_acc_best_model.weights.h5",
    monitor="val_accuracy",
    verbose=1,
    save_best_only=True,   # keep only the weights with the best val_accuracy
    save_weights_only=True
)

epochs = 50

history = model.fit(
    train_ds,
    validation_data=test_ds,
    epochs=epochs,         # reuse the variable instead of repeating the literal 50
    callbacks=[checkpointer]
)

Training output:

Epoch 1: val_accuracy improved from -inf to 0.54206, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 22s 225ms/step - accuracy: 0.5250 - loss: 0.9477 - val_accuracy: 0.5421 - val_loss: 0.6760
Epoch 2/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.5527 - loss: 0.6760
Epoch 2: val_accuracy improved from 0.54206 to 0.56075, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 32ms/step - accuracy: 0.5530 - loss: 0.6758 - val_accuracy: 0.5607 - val_loss: 0.6698
Epoch 3/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.5769 - loss: 0.6644
Epoch 3: val_accuracy improved from 0.56075 to 0.63785, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 32ms/step - accuracy: 0.5770 - loss: 0.6645 - val_accuracy: 0.6379 - val_loss: 0.6539
Epoch 4/50
54/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.6187 - loss: 0.6547
Epoch 4: val_accuracy did not improve from 0.63785
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.6187 - loss: 0.6547 - val_accuracy: 0.6285 - val_loss: 0.6395
Epoch 5/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.6369 - loss: 0.6366
Epoch 5: val_accuracy improved from 0.63785 to 0.66589, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 3s 54ms/step - accuracy: 0.6373 - loss: 0.6363 - val_accuracy: 0.6659 - val_loss: 0.6053
Epoch 6/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.6879 - loss: 0.5990
Epoch 6: val_accuracy improved from 0.66589 to 0.71028, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 32ms/step - accuracy: 0.6872 - loss: 0.5990 - val_accuracy: 0.7103 - val_loss: 0.5891
Epoch 7/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.6993 - loss: 0.5846
Epoch 7: val_accuracy improved from 0.71028 to 0.73131, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 3s 54ms/step - accuracy: 0.6997 - loss: 0.5841 - val_accuracy: 0.7313 - val_loss: 0.5452
Epoch 8/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.7390 - loss: 0.5234
Epoch 8: val_accuracy improved from 0.73131 to 0.74299, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 32ms/step - accuracy: 0.7383 - loss: 0.5239 - val_accuracy: 0.7430 - val_loss: 0.5217
Epoch 9/50
54/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.7528 - loss: 0.5218
Epoch 9: val_accuracy did not improve from 0.74299
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.7529 - loss: 0.5217 - val_accuracy: 0.7383 - val_loss: 0.5260
Epoch 10/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.7589 - loss: 0.4948
Epoch 10: val_accuracy did not improve from 0.74299
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.7592 - loss: 0.4942 - val_accuracy: 0.7383 - val_loss: 0.4946
Epoch 11/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.8012 - loss: 0.4337
Epoch 11: val_accuracy improved from 0.74299 to 0.76636, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 32ms/step - accuracy: 0.8008 - loss: 0.4343 - val_accuracy: 0.7664 - val_loss: 0.4687
Epoch 12/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.8158 - loss: 0.4127
Epoch 12: val_accuracy did not improve from 0.76636
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.8152 - loss: 0.4126 - val_accuracy: 0.7523 - val_loss: 0.4768
Epoch 13/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.8226 - loss: 0.4009
Epoch 13: val_accuracy improved from 0.76636 to 0.79907, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 32ms/step - accuracy: 0.8233 - loss: 0.3999 - val_accuracy: 0.7991 - val_loss: 0.4415
Epoch 14/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.8671 - loss: 0.3246
Epoch 14: val_accuracy improved from 0.79907 to 0.81075, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 31ms/step - accuracy: 0.8662 - loss: 0.3258 - val_accuracy: 0.8107 - val_loss: 0.4574
Epoch 15/50
54/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.8617 - loss: 0.3364
Epoch 15: val_accuracy did not improve from 0.81075
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.8612 - loss: 0.3367 - val_accuracy: 0.7897 - val_loss: 0.4779
Epoch 16/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.8723 - loss: 0.3229
Epoch 16: val_accuracy improved from 0.81075 to 0.82477, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 41ms/step - accuracy: 0.8723 - loss: 0.3226 - val_accuracy: 0.8248 - val_loss: 0.4329
Epoch 17/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.8853 - loss: 0.2767
Epoch 17: val_accuracy did not improve from 0.82477
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.8852 - loss: 0.2772 - val_accuracy: 0.7850 - val_loss: 0.4768
Epoch 18/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.8823 - loss: 0.2843
Epoch 18: val_accuracy did not improve from 0.82477
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.8827 - loss: 0.2838 - val_accuracy: 0.8014 - val_loss: 0.4581
Epoch 19/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.8923 - loss: 0.2683
Epoch 19: val_accuracy improved from 0.82477 to 0.84112, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 30ms/step - accuracy: 0.8925 - loss: 0.2679 - val_accuracy: 0.8411 - val_loss: 0.3734
Epoch 20/50
51/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9174 - loss: 0.2177
Epoch 20: val_accuracy improved from 0.84112 to 0.84579, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 31ms/step - accuracy: 0.9170 - loss: 0.2184 - val_accuracy: 0.8458 - val_loss: 0.3712
Epoch 21/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9248 - loss: 0.2099
Epoch 21: val_accuracy improved from 0.84579 to 0.85514, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 31ms/step - accuracy: 0.9247 - loss: 0.2098 - val_accuracy: 0.8551 - val_loss: 0.4105
Epoch 22/50
54/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9279 - loss: 0.1985
Epoch 22: val_accuracy did not improve from 0.85514
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9277 - loss: 0.1991 - val_accuracy: 0.8505 - val_loss: 0.3894
Epoch 23/50
51/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9361 - loss: 0.1877
Epoch 23: val_accuracy did not improve from 0.85514
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9357 - loss: 0.1875 - val_accuracy: 0.8364 - val_loss: 0.4128
Epoch 24/50
54/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9339 - loss: 0.1915
Epoch 24: val_accuracy did not improve from 0.85514
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9339 - loss: 0.1913 - val_accuracy: 0.8224 - val_loss: 0.4292
Epoch 25/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step - accuracy: 0.9305 - loss: 0.1942
Epoch 25: val_accuracy improved from 0.85514 to 0.85748, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 31ms/step - accuracy: 0.9304 - loss: 0.1940 - val_accuracy: 0.8575 - val_loss: 0.4174
Epoch 26/50
51/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9307 - loss: 0.1659
Epoch 26: val_accuracy did not improve from 0.85748
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9309 - loss: 0.1655 - val_accuracy: 0.8505 - val_loss: 0.3751
Epoch 27/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9484 - loss: 0.1442
Epoch 27: val_accuracy improved from 0.85748 to 0.86449, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 31ms/step - accuracy: 0.9481 - loss: 0.1449 - val_accuracy: 0.8645 - val_loss: 0.4082
Epoch 28/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9482 - loss: 0.1396
Epoch 28: val_accuracy improved from 0.86449 to 0.87150, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 31ms/step - accuracy: 0.9482 - loss: 0.1396 - val_accuracy: 0.8715 - val_loss: 0.3490
Epoch 29/50
51/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9521 - loss: 0.1421
Epoch 29: val_accuracy did not improve from 0.87150
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9520 - loss: 0.1421 - val_accuracy: 0.8528 - val_loss: 0.3904
Epoch 30/50
51/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9675 - loss: 0.1093
Epoch 30: val_accuracy improved from 0.87150 to 0.87383, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 31ms/step - accuracy: 0.9675 - loss: 0.1097 - val_accuracy: 0.8738 - val_loss: 0.4225
Epoch 31/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9570 - loss: 0.1267
Epoch 31: val_accuracy did not improve from 0.87383
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9572 - loss: 0.1263 - val_accuracy: 0.8668 - val_loss: 0.4402
Epoch 32/50
51/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9528 - loss: 0.1187
Epoch 32: val_accuracy improved from 0.87383 to 0.87850, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 31ms/step - accuracy: 0.9530 - loss: 0.1188 - val_accuracy: 0.8785 - val_loss: 0.3580
Epoch 33/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9571 - loss: 0.1276
Epoch 33: val_accuracy did not improve from 0.87850
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9570 - loss: 0.1274 - val_accuracy: 0.8715 - val_loss: 0.3967
Epoch 34/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9696 - loss: 0.0978
Epoch 34: val_accuracy did not improve from 0.87850
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9696 - loss: 0.0975 - val_accuracy: 0.8575 - val_loss: 0.4559
Epoch 35/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9673 - loss: 0.1058
Epoch 35: val_accuracy did not improve from 0.87850
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9671 - loss: 0.1062 - val_accuracy: 0.8645 - val_loss: 0.3885
Epoch 36/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9715 - loss: 0.0888
Epoch 36: val_accuracy did not improve from 0.87850
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9715 - loss: 0.0889 - val_accuracy: 0.8692 - val_loss: 0.4045
Epoch 37/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9662 - loss: 0.0953
Epoch 37: val_accuracy did not improve from 0.87850
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9661 - loss: 0.0953 - val_accuracy: 0.8692 - val_loss: 0.3900
Epoch 38/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9816 - loss: 0.0827
Epoch 38: val_accuracy did not improve from 0.87850
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9814 - loss: 0.0827 - val_accuracy: 0.8692 - val_loss: 0.3697
Epoch 39/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9798 - loss: 0.0683
Epoch 39: val_accuracy did not improve from 0.87850
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9797 - loss: 0.0684 - val_accuracy: 0.8575 - val_loss: 0.4088
Epoch 40/50
51/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9807 - loss: 0.0667
Epoch 40: val_accuracy did not improve from 0.87850
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9804 - loss: 0.0670 - val_accuracy: 0.8388 - val_loss: 0.5750
Epoch 41/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9362 - loss: 0.2253
Epoch 41: val_accuracy did not improve from 0.87850
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9350 - loss: 0.2259 - val_accuracy: 0.8551 - val_loss: 0.4077
Epoch 42/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9653 - loss: 0.1128
Epoch 42: val_accuracy improved from 0.87850 to 0.88084, saving model to val_acc_best_model.weights.h5
54/54 ━━━━━━━━━━━━━━━━━━━━ 2s 31ms/step - accuracy: 0.9652 - loss: 0.1126 - val_accuracy: 0.8808 - val_loss: 0.3950
Epoch 43/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9669 - loss: 0.0774
Epoch 43: val_accuracy did not improve from 0.88084
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9673 - loss: 0.0771 - val_accuracy: 0.8715 - val_loss: 0.4158
Epoch 44/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9905 - loss: 0.0481
Epoch 44: val_accuracy did not improve from 0.88084
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9901 - loss: 0.0486 - val_accuracy: 0.8692 - val_loss: 0.4283
Epoch 45/50
51/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9804 - loss: 0.0520
Epoch 45: val_accuracy did not improve from 0.88084
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9807 - loss: 0.0523 - val_accuracy: 0.8668 - val_loss: 0.3909
Epoch 46/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9916 - loss: 0.0462
Epoch 46: val_accuracy did not improve from 0.88084
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9913 - loss: 0.0466 - val_accuracy: 0.8785 - val_loss: 0.4404
Epoch 47/50
51/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9890 - loss: 0.0426
Epoch 47: val_accuracy did not improve from 0.88084
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9891 - loss: 0.0428 - val_accuracy: 0.8762 - val_loss: 0.5029
Epoch 48/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9891 - loss: 0.0440
Epoch 48: val_accuracy did not improve from 0.88084
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9890 - loss: 0.0444 - val_accuracy: 0.8645 - val_loss: 0.4832
Epoch 49/50
52/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9956 - loss: 0.0390
Epoch 49: val_accuracy did not improve from 0.88084
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9954 - loss: 0.0391 - val_accuracy: 0.8738 - val_loss: 0.4241
Epoch 50/50
53/54 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9887 - loss: 0.0399
Epoch 50: val_accuracy did not improve from 0.88084
54/54 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9886 - loss: 0.0401 - val_accuracy: 0.8808 - val_loss: 0.4120
 
8. Model evaluation:
acc = history.history["accuracy"]
val_acc = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]

plt.figure(figsize=(12, 4))

x = range(epochs)

plt.subplot(1, 2, 1)
plt.plot(x, acc, label="Training Accuracy")
plt.plot(x, val_acc, label="Validation Accuracy")
plt.legend(loc="lower right")
plt.title("Accuracy")

plt.subplot(1, 2, 2)
plt.plot(x, loss, label="Training Loss")
plt.plot(x, val_loss, label="Validation Loss")
plt.legend(loc="lower right")
plt.title("Loss")

plt.show()
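Besides the plots, the checkpointed best epoch can be read off programmatically from `history.history`. A sketch with a shortened, hypothetical stand-in for the real history dict:

```python
# Hypothetical, shortened stand-in for history.history
hist = {"val_accuracy": [0.5421, 0.7103, 0.8808, 0.8738]}

best_epoch = max(range(len(hist["val_accuracy"])),
                 key=hist["val_accuracy"].__getitem__)
print(best_epoch + 1, hist["val_accuracy"][best_epoch])  # 3 0.8808
```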
 
9. Predict a single image:
num = 1

others_dir = pathlib.Path("/kaggle/input/mpdataset/Others")
image_paths = list(others_dir.glob("*.jpg"))

img = PIL.Image.open(image_paths[num])
img = tf.image.resize(np.array(img), [image_height, image_width])  # convert to an array, then resize

model.load_weights("/kaggle/working/val_acc_best_model.weights.h5")  # restore the best checkpoint

img_array = tf.expand_dims(img, 0)  # add a batch dimension: (1, 224, 224, 3)

pred = model.predict(img_array)
print(class_names[np.argmax(pred)])

PIL.Image.open(image_paths[num])
 
Others
10. Summary:

This time I added a checkpointer, and while working on this exercise I also solved the same problem in the PyTorch framework. TensorFlow still feels simpler: what takes five or six lines in torch, tf handles in one... That said, a heavily encapsulated framework really isn't ideal for someone just starting out with DL, so it cuts both ways. Studying the two codebases side by side turned out to be very rewarding.
