Deep Learning Week 4: Monkeypox Recognition with TensorFlow

I. What I Learned This Week

1. Using a TensorFlow callback to save the best model during training

from keras.callbacks import ModelCheckpoint
# With monitor='val_accuracy' training kept printing
# "Can save best model only with val_accuracy available, skipping."
# monitor='val_acc' works, because the model is compiled with metrics=['acc']
checkpointer = ModelCheckpoint(
    filepath='models_weights.h5',  # where to save the model
    monitor='val_acc',      # metric to monitor
    verbose=1,              # 1 prints a message whenever a checkpoint is saved
    save_best_only=True,    # keep only the best model seen so far
    save_weights_only=True  # save weights only: the file is smaller, but the same
                            # network must be rebuilt before the weights can be loaded
)
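The "val_accuracy vs. val_acc" warning above happens because ModelCheckpoint looks up its `monitor` string in the per-epoch logs dict, whose keys come from the metric names passed to `model.compile(metrics=[...])`. A minimal pure-Python sketch of that lookup (no TensorFlow needed; the `logs` dict and `can_monitor` helper are hypothetical illustrations, not Keras API):

```python
# Hypothetical per-epoch logs for a model compiled with metrics=['acc']:
# Keras names the metrics 'acc'/'val_acc', not 'accuracy'/'val_accuracy'.
logs = {"loss": 0.95, "acc": 0.56, "val_loss": 0.61, "val_acc": 0.68}

def can_monitor(monitor: str, logs: dict) -> bool:
    """Return True if a checkpoint callback would find its monitored metric."""
    return monitor in logs

print(can_monitor("val_accuracy", logs))  # False -> "skipping" warning
print(can_monitor("val_acc", logs))       # True  -> checkpoint can be saved
```

Compiling with `metrics=['accuracy']` instead would produce the keys `accuracy`/`val_accuracy`, so the monitor name just has to match whichever spelling was used at compile time.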

II. Preface

The monkeypox dataset contains 2,142 images in two classes, where the class names are the folder names:
['Monkeypox', 'Others']
Loading it with tensorflow.keras.preprocessing.image_dataset_from_directory() prints the following:
Found 2142 files belonging to 2 classes.
Using 428 files for validation.
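The 428 validation files are simply the 20% split requested later via `validation_split=0.2`; a quick sketch of the arithmetic (the floor via `int()` matches the figure in the log):

```python
total_files = 2142
validation_split = 0.2

val_files = int(total_files * validation_split)  # floor of 428.4
train_files = total_files - val_files

print(val_files, train_files)  # 428 1714
```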

III. Environment

OS: Windows 10
Python: 3.8.8
IDE: PyCharm 2021.1.3
Deep learning stack: TensorFlow 2.8.0, Keras 2.8.0
GPU: RTX 3070, 8 GB VRAM

IV. Preparation

1. Import dependencies

from keras.models import Sequential
from keras.layers import *
from tensorflow import keras
from keras.callbacks import ModelCheckpoint
import tensorflow as tf
import matplotlib.pyplot as plt

2. GPU Setup (the tensorflow-gpu package uses the GPU by default)

GPU only:

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")  # list the available GPUs
if gpus:
    gpu0 = gpus[0]                                        # if there are several GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

CPU only (hide all GPUs before TensorFlow initializes):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

3. Loading and Displaying the Dataset

(1) Loading the dataset

# Dataset loading is the same as last week
datadir = ".\第4周"

train = keras.preprocessing.image_dataset_from_directory(directory=datadir,
                                                           validation_split=0.2,
                                                           subset='training',
                                                           seed=123,
                                                           image_size=(224,224),
                                                           batch_size=32)
val = keras.preprocessing.image_dataset_from_directory(directory=datadir,
                                                           validation_split=0.2,
                                                           subset='validation',
                                                           seed=123,
                                                           image_size=(224,224),
                                                           batch_size=32)
class_names = train.class_names  # get the class names contained in the dataset
print(class_names)

(2) Displaying the data

Image display (same as last week):

# Display a batch of images
plt.figure(figsize=(20, 5))  # create a figure 20 inches wide and 5 inches tall
for images, labels in train.take(1):
    for i in range(20):
        # split the figure into 2 rows x 10 columns and draw subplot i+1
        plt.subplot(2, 10, i+1)
        plt.imshow(images[i].numpy().astype("uint8"), cmap=plt.cm.binary)
        plt.title(class_names[labels[i]])
        plt.axis('off')

plt.show()  # PyCharm needs this line to actually display the figure
# train.take(i) takes i batches of image/label pairs; with batch_size=32,
# the 1714 training images give at most ceil(1714/32) = 54 batches
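The batch count mentioned in the comment above can be checked directly; the last batch is partial, so it is a ceiling rather than a plain division (this matches the "54/54" steps shown later in the training log):

```python
import math

train_files = 2142 - 428   # training images left after the validation split
batch_size = 32

steps = math.ceil(train_files / batch_size)  # 53 full batches + 1 partial batch
print(steps)  # 54
```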

Since the images in this dataset are somewhat graphic, they are not displayed here.

V. Data Preprocessing

Because the dataset is loaded with tensorflow.keras.preprocessing.image_dataset_from_directory(), which differs from the loaders used in previous weeks, the first network layer is changed to preprocess the data. Note that this function does not return one-hot encoded labels.
The label format is:
tf.Tensor([1 1 1 0 0 1 1 0 0 1 1 1 0 0 0 1 0 1 0 1 0 1 0 1 1 0 1 0 0 0 0 1], shape=(32,), dtype=int32)
Each class is represented by an integer (0 or 1) rather than a one-hot vector.
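Because the labels are plain integers rather than one-hot vectors, the loss used below has to be SparseCategoricalCrossentropy. A NumPy sketch of the two equivalent encodings (the sample labels are illustrative):

```python
import numpy as np

labels = np.array([1, 1, 0, 1, 0])   # integer-encoded, as this loader returns
one_hot = np.eye(2)[labels]          # the equivalent one-hot encoding
print(one_hot)
# [[0. 1.]
#  [0. 1.]
#  [1. 0.]
#  [0. 1.]
#  [1. 0.]]
# SparseCategoricalCrossentropy consumes `labels` directly;
# CategoricalCrossentropy would need `one_hot` instead.
```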

# Data preprocessing: scale pixel values from [0, 255] to [0, 1]
tf.keras.layers.experimental.preprocessing.Rescaling(1./255, input_shape=(224,224,3))
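The Rescaling layer is just an element-wise division baked into the model; a toy NumPy equivalent:

```python
import numpy as np

img = np.array([[0, 128, 255]], dtype=np.uint8)  # toy "image" row
scaled = img.astype("float32") / 255.0           # what Rescaling(1./255) computes

print(scaled)  # [[0.         0.5019608  1.        ]]
```

Putting this step inside the model means raw uint8 images can be fed in directly, at both training and inference time.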

VI. Building the CNN

Apart from the first and last layers, the network is largely the same as in the previous two weeks.

# Network model
model = Sequential([
    tf.keras.layers.experimental.preprocessing.Rescaling(1./255, input_shape=(224,224,3)),  # scale pixels to [0, 1]
    Conv2D(filters=32, kernel_size=3, activation='relu'),
    MaxPool2D((2,2)),
    Conv2D(filters=64, kernel_size=3, activation='relu'),
    MaxPool2D((2,2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(len(class_names))  # no softmax here: the layer outputs raw logits
])
# Optimizer, loss and metrics: from_logits=True applies softmax inside the loss,
# and SparseCategoricalCrossentropy matches the integer (non-one-hot) labels
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['acc'])
# Pass the checkpoint callback to fit()
# With monitor='val_accuracy' training kept printing
# "Can save best model only with val_accuracy available, skipping."; 'val_acc' works
checkpointer = ModelCheckpoint(
    filepath='models_weights.h5',  # where to save the model
    monitor='val_acc',      # monitor the validation accuracy
    verbose=1,              # print a message whenever a checkpoint is saved
    save_best_only=True,    # keep only the best model seen so far
    save_weights_only=True  # weights only: rebuild the same network before loading
)
history = model.fit(train, validation_data=val, epochs=20, verbose=1,
                    callbacks=[checkpointer])
evaluate = model.evaluate(val)
print(evaluate)
Epoch 1/20
54/54 [==============================] - ETA: 0s - loss: 0.9504 - acc: 0.5648
Epoch 1: val_acc improved from -inf to 0.67523, saving model to models_weights.h5
54/54 [==============================] - 6s 49ms/step - loss: 0.9504 - acc: 0.5648 - val_loss: 0.6060 - val_acc: 0.6752
Epoch 2/20
53/54 [============================>.] - ETA: 0s - loss: 0.5618 - acc: 0.6940
Epoch 2: val_acc improved from 0.67523 to 0.74065, saving model to models_weights.h5
54/54 [==============================] - 3s 49ms/step - loss: 0.5632 - acc: 0.6949 - val_loss: 0.5193 - val_acc: 0.7407
Epoch 3/20
53/54 [============================>.] - ETA: 0s - loss: 0.4335 - acc: 0.7936
Epoch 3: val_acc improved from 0.74065 to 0.77804, saving model to models_weights.h5
54/54 [==============================] - 2s 44ms/step - loss: 0.4310 - acc: 0.7952 - val_loss: 0.4503 - val_acc: 0.7780
Epoch 4/20
53/54 [============================>.] - ETA: 0s - loss: 0.3446 - acc: 0.8561
Epoch 4: val_acc improved from 0.77804 to 0.81542, saving model to models_weights.h5
54/54 [==============================] - 3s 51ms/step - loss: 0.3424 - acc: 0.8576 - val_loss: 0.4308 - val_acc: 0.8154
Epoch 5/20
53/54 [============================>.] - ETA: 0s - loss: 0.3017 - acc: 0.8821
Epoch 5: val_acc did not improve from 0.81542
54/54 [==============================] - 2s 36ms/step - loss: 0.3002 - acc: 0.8827 - val_loss: 0.5169 - val_acc: 0.7897
Epoch 6/20
53/54 [============================>.] - ETA: 0s - loss: 0.2672 - acc: 0.9009
Epoch 6: val_acc did not improve from 0.81542
54/54 [==============================] - 2s 35ms/step - loss: 0.2660 - acc: 0.9014 - val_loss: 0.4567 - val_acc: 0.7944
Epoch 7/20
53/54 [============================>.] - ETA: 0s - loss: 0.1579 - acc: 0.9369
Epoch 7: val_acc improved from 0.81542 to 0.87617, saving model to models_weights.h5
54/54 [==============================] - 2s 43ms/step - loss: 0.1584 - acc: 0.9370 - val_loss: 0.4475 - val_acc: 0.8762
Epoch 8/20
53/54 [============================>.] - ETA: 0s - loss: 0.1501 - acc: 0.9517
Epoch 8: val_acc did not improve from 0.87617
54/54 [==============================] - 2s 36ms/step - loss: 0.1492 - acc: 0.9522 - val_loss: 0.5837 - val_acc: 0.8411
Epoch 9/20
53/54 [============================>.] - ETA: 0s - loss: 0.1324 - acc: 0.9511
Epoch 9: val_acc did not improve from 0.87617
54/54 [==============================] - 2s 36ms/step - loss: 0.1319 - acc: 0.9510 - val_loss: 0.5143 - val_acc: 0.8318
Epoch 10/20
53/54 [============================>.] - ETA: 0s - loss: 0.0921 - acc: 0.9688
Epoch 10: val_acc did not improve from 0.87617
54/54 [==============================] - 2s 37ms/step - loss: 0.0922 - acc: 0.9685 - val_loss: 0.5655 - val_acc: 0.8738
Epoch 11/20
52/54 [===========================>..] - ETA: 0s - loss: 0.0872 - acc: 0.9681
Epoch 11: val_acc did not improve from 0.87617
54/54 [==============================] - 2s 37ms/step - loss: 0.0862 - acc: 0.9685 - val_loss: 0.6781 - val_acc: 0.8551
Epoch 12/20
53/54 [============================>.] - ETA: 0s - loss: 0.0880 - acc: 0.9717
Epoch 12: val_acc did not improve from 0.87617
54/54 [==============================] - 2s 36ms/step - loss: 0.0872 - acc: 0.9720 - val_loss: 0.5794 - val_acc: 0.8715
Epoch 13/20
53/54 [============================>.] - ETA: 0s - loss: 0.0992 - acc: 0.9670
Epoch 13: val_acc did not improve from 0.87617
54/54 [==============================] - 2s 38ms/step - loss: 0.0992 - acc: 0.9667 - val_loss: 0.5335 - val_acc: 0.8668
Epoch 14/20
53/54 [============================>.] - ETA: 0s - loss: 0.1011 - acc: 0.9623
Epoch 14: val_acc did not improve from 0.87617
54/54 [==============================] - 2s 36ms/step - loss: 0.1006 - acc: 0.9627 - val_loss: 0.5108 - val_acc: 0.8738
Epoch 15/20
53/54 [============================>.] - ETA: 0s - loss: 0.0663 - acc: 0.9788
Epoch 15: val_acc did not improve from 0.87617
54/54 [==============================] - 2s 35ms/step - loss: 0.0681 - acc: 0.9778 - val_loss: 0.7289 - val_acc: 0.8271
Epoch 16/20
53/54 [============================>.] - ETA: 0s - loss: 0.1525 - acc: 0.9493
Epoch 16: val_acc improved from 0.87617 to 0.88318, saving model to models_weights.h5
54/54 [==============================] - 3s 56ms/step - loss: 0.1518 - acc: 0.9498 - val_loss: 0.4598 - val_acc: 0.8832
Epoch 17/20
53/54 [============================>.] - ETA: 0s - loss: 0.0591 - acc: 0.9805
Epoch 17: val_acc did not improve from 0.88318
54/54 [==============================] - 2s 36ms/step - loss: 0.0589 - acc: 0.9807 - val_loss: 0.7397 - val_acc: 0.8692
Epoch 18/20
53/54 [============================>.] - ETA: 0s - loss: 0.0518 - acc: 0.9835
Epoch 18: val_acc did not improve from 0.88318
54/54 [==============================] - 2s 35ms/step - loss: 0.0516 - acc: 0.9837 - val_loss: 0.5769 - val_acc: 0.8715
Epoch 19/20
53/54 [============================>.] - ETA: 0s - loss: 0.0505 - acc: 0.9876
Epoch 19: val_acc did not improve from 0.88318
54/54 [==============================] - 2s 35ms/step - loss: 0.0506 - acc: 0.9877 - val_loss: 0.6275 - val_acc: 0.8785
Epoch 20/20
53/54 [============================>.] - ETA: 0s - loss: 0.0662 - acc: 0.9794
Epoch 20: val_acc did not improve from 0.88318
54/54 [==============================] - 2s 36ms/step - loss: 0.0667 - acc: 0.9790 - val_loss: 0.6822 - val_acc: 0.8692
14/14 [==============================] - 0s 16ms/step - loss: 0.6822 - acc: 0.8692
[0.6822198629379272, 0.8691588640213013]
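Because the final Dense layer has no softmax, the model outputs raw logits; `from_logits=True` tells the loss to apply softmax internally, and at inference argmax over logits picks the same class as argmax over probabilities. A NumPy sketch (the logit values are made up for illustration):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, -1.0])  # hypothetical raw model output for one image
probs = softmax(logits)

print(probs)                                            # [0.95257413 0.04742587]
print(int(np.argmax(logits)) == int(np.argmax(probs)))  # True
```

Softmax is monotonic, so leaving it out of the model changes nothing for classification while keeping the loss computation more numerically stable.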

VII. Plotting the Loss and Accuracy Curves

The plotting code is the same as in previous weeks.

# Plot accuracy and loss curves
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(len(acc))  # one point per training epoch
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure: training and validation accuracy/loss curves]

VIII. Loading the Model for Verification

from keras.models import Sequential
from keras.layers import *
from tensorflow import keras
import tensorflow as tf
import numpy as np
import cv2

class_name=['Monkeypox', 'Others']
model = Sequential([
    tf.keras.layers.experimental.preprocessing.Rescaling(1./255,input_shape=(224,224,3)),
    Conv2D(filters=32,kernel_size=3,activation='relu',input_shape=(224,224,3)),
    MaxPool2D((2,2)),
    Conv2D(filters=64,kernel_size=3,activation='relu'),
    MaxPool2D((2,2)),
    Flatten(),
    Dense(64,activation='relu'),
    Dense(len(class_name))
])


model.load_weights('./models_weights.h5')
# Take three images from each class for verification
filepath = ["./第4周/Monkeypox/M01_01_00.jpg", "./第4周/Monkeypox/M01_01_11.jpg", "./第4周/Monkeypox/M01_02_01.jpg",
            "./第4周/Others/NM01_01_00.jpg", "./第4周/Others/NM01_01_11.jpg", "./第4周/Others/NM05_01_01.jpg"]
# OpenCV cannot open paths containing Chinese characters, so read the raw bytes
# with numpy and decode them with OpenCV
for path in filepath:
    img = cv2.imdecode(np.fromfile(path, dtype=np.uint8), 1)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)           # cv2 decodes to BGR; training used RGB
    img = cv2.resize(img, (224, 224)).reshape(1, 224, 224, 3)
    # no manual /255 here: the model's Rescaling layer already scales the pixels
    predict = model.predict(img)
    print(predict)
    print(class_name[np.argmax(predict)])


That concludes what I learned this week.
