Week T5: Sneaker Brand Recognition

This post shows how to set a dynamic learning rate when training a deep-learning model, in particular an exponential-decay schedule, and how to apply early stopping (EarlyStopping) to improve model performance. Using a simple CNN, it walks through data preprocessing, model training, and model evaluation, and stresses the importance of monitoring validation accuracy and guarding against overfitting during training.

Z. Takeaways and Supplementary Notes

1. EarlyStopping() parameters (a minimal usage sketch follows the list)

  • monitor: the quantity to be monitored (here we use val_accuracy, the accuracy on the validation set)
  • min_delta: the minimum change in the monitored quantity that counts as an improvement; an absolute change smaller than min_delta is treated as no improvement
  • patience: the number of epochs with no improvement after which training is stopped
  • verbose: verbosity mode
  • mode: one of {auto, min, max}. In min mode, training stops when the monitored quantity stops decreasing; in max mode, when it stops increasing; in auto mode, the direction is inferred from the name of the monitored quantity
  • baseline: baseline value for the monitored quantity; training stops if the model shows no improvement over the baseline
  • restore_best_weights: whether to restore the model weights from the epoch with the best value of the monitored quantity. If False, the weights from the last step of training are used
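
A minimal usage sketch for reference (standard tf.keras API; the monitored metric and thresholds mirror the values used in section IV below):

from tensorflow.keras.callbacks import EarlyStopping

# stop when val_accuracy has not improved by at least 0.001 for 20 epochs,
# and roll back to the weights of the best epoch when training stops
early_stop = EarlyStopping(
    monitor='val_accuracy',
    min_delta=0.001,
    patience=20,
    verbose=1,
    mode='max',                  # val_accuracy is a metric that should increase
    restore_best_weights=True    # if False, the last-epoch weights are kept
)
# model.fit(..., callbacks=[early_stop])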

2. Pros and cons of large vs. small learning rates

(1) Large learning rate:

  • Pros:
    1. Speeds up learning
    2. Helps the optimizer escape local optima
  • Cons:
    1. Training may fail to converge
    2. A large learning rate on its own tends to produce an inaccurate model

(2) Small learning rate:

  • Pros:
    1. Helps the model converge and refine
    2. Improves model accuracy
  • Cons:
    1. Hard to escape local optima
    2. Slow convergence

P.S. The dynamic learning rate used here is exponential decay (ExponentialDecay). The formula is:
learning_rate = initial_learning_rate * decay_rate ^ (step / decay_steps)
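
To make the formula concrete, a small sketch (the parameter values match the schedule configured in section IV below):

import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,
    decay_steps=10,
    decay_rate=0.92,
    staircase=True  # truncate step / decay_steps to an integer, so the rate drops in discrete jumps
)

print(lr_schedule(0).numpy())   # 0.1     = 0.1 * 0.92^0
print(lr_schedule(10).numpy())  # 0.092   = 0.1 * 0.92^1
print(lr_schedule(25).numpy())  # 0.08464 = 0.1 * 0.92^2  (floor(25 / 10) = 2)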

I. Preliminary Work

1. Set up the GPU

from tensorflow import keras
from tensorflow.keras import layers, models
import os, PIL, pathlib
import matplotlib.pyplot as plt
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')

if gpus:
    gpu0 = gpus[0]  # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], 'GPU')
    
gpus

2. Import the data

data_dir = './data/'

data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*/*.jpg')))

print("图片总数为:", image_count)

Total number of images: 578

3. Inspect the data

nike_images = list(data_dir.glob('train/nike/*.jpg'))  # sample images from the nike class
PIL.Image.open(str(nike_images[0]))

[Figure: one sample Nike sneaker image from the training set]

II. Data Preprocessing

1. Load the data

batch_size = 32
img_height = 224
img_width = 224
# load the training images from disk into a tf.data.Dataset with image_dataset_from_directory
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "./data/train/",
    seed = 123,
    image_size=(img_height, img_width),
    batch_size=batch_size
)

Found 502 files belonging to 2 classes.

# likewise, load the test images into a tf.data.Dataset to use as the validation set
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "./data/test/",
    seed = 123,
    image_size=(img_height, img_width),
    batch_size=batch_size
)

Found 76 files belonging to 2 classes.

# class_names lists the dataset labels, which correspond to the directory names in alphabetical order
class_names = train_ds.class_names
print(class_names)

['adidas', 'nike']

2. Visualize the data

plt.figure(figsize=(20, 10))
for images, labels in train_ds.take(1):
    for i in range(20):
        # split the figure into a 5 x 10 grid and draw subplot i+1
        ax = plt.subplot(5, 10, i + 1)
        # display the image (cast back to uint8 for imshow)
        plt.imshow(images[i].numpy().astype('uint8'))
        # title each subplot with the image's class name
        plt.title(class_names[labels[i]])
        plt.axis('off')


[Figure: a 5 x 10 grid of sample training images, each titled adidas or nike]

3. Configure the dataset

AUTOTUNE = tf.data.AUTOTUNE

# cache keeps images in memory after the first epoch, shuffle(1000) randomizes
# their order with a 1000-image buffer, and prefetch overlaps loading with training
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

III. Build the CNN Model

# create and configure the convolutional neural network

model = models.Sequential([
    # scale pixel values to [0, 1]; this layer also fixes the input shape
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),

    # conv layer 1: 16 filters of size 3x3, ReLU activation
    # (input_shape here is redundant, since Rescaling already declares it)
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(img_height, img_width, 3)),
    # pooling layer 1: 2x2 average pooling
    layers.AveragePooling2D((2, 2)),
    # conv layer 2: 32 filters of size 3x3
    layers.Conv2D(32, (3, 3), activation='relu'),
    # pooling layer 2: 2x2 average pooling
    layers.AveragePooling2D((2, 2)),
    layers.Dropout(0.3),
    layers.Conv2D(64, (3, 3), activation='relu'),  # conv layer 3: 64 filters of size 3x3
    layers.Dropout(0.3),  # randomly disable neurons to reduce overfitting and improve generalization

    layers.Flatten(),                      # flatten feature maps to connect conv layers to dense layers
    layers.Dense(128, activation='relu'),  # fully connected layer with a 128-dimensional output space
    layers.Dense(len(class_names))         # output layer: one logit per class
])

model.summary()

Model: "sequential"
_________________________________________________________________
 Layer (type)                 Output Shape              Param #
=================================================================
 rescaling_1 (Rescaling)      (None, 224, 224, 3)       0

 conv2d_3 (Conv2D)            (None, 222, 222, 16)      448

 average_pooling2d_2          (None, 111, 111, 16)      0
 (AveragePooling2D)

 conv2d_4 (Conv2D)            (None, 109, 109, 32)      4640

 average_pooling2d_3          (None, 54, 54, 32)        0
 (AveragePooling2D)

 dropout_2 (Dropout)          (None, 54, 54, 32)        0

 conv2d_5 (Conv2D)            (None, 52, 52, 64)        18496

 dropout_3 (Dropout)          (None, 52, 52, 64)        0

 flatten_1 (Flatten)          (None, 173056)            0

 dense_1 (Dense)              (None, 128)               22151296

 dense_2 (Dense)              (None, 2)                 258

=================================================================
Total params: 22,175,138
Trainable params: 22,175,138
Non-trainable params: 0
_________________________________________________________________
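
As a sanity check on the table: a Conv2D layer has (kernel_h * kernel_w * in_channels + 1) * filters parameters and a Dense layer has (inputs + 1) * units, which reproduces every row:

conv2d_3: (3*3*3  + 1) * 16  = 448
conv2d_4: (3*3*16 + 1) * 32  = 4,640
conv2d_5: (3*3*32 + 1) * 64  = 18,496
dense_1:  (173056 + 1) * 128 = 22,151,296   (173056 = 52 * 52 * 64 flattened features)
dense_2:  (128 + 1) * 2      = 258

Almost all of the 22 million parameters sit in dense_1, the Flatten-to-Dense connection.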


IV. Train the Model

1. Set a dynamic learning rate

# set the initial learning rate
initial_learning_rate = 0.1

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=10,   # measured in steps (batches), not epochs
    decay_rate=0.92,  # each decay multiplies the current lr by decay_rate
    staircase=True
)
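
For scale: with 502 training images and batch_size = 32 there are 16 steps per epoch (matching the 16/16 in the training log below), so with decay_steps=10 the learning rate decays a little more often than once per epoch.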

# feed the exponentially decaying learning rate into the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

model.compile(
    # use the Adam optimizer
    optimizer=optimizer,
    # use sparse categorical cross-entropy as the loss;
    # with from_logits=True, y_pred is converted to probabilities via softmax,
    # otherwise no conversion is applied; True is usually more numerically stable
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy']
)

2. Early stopping and saving the best model weights

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 50

# save the best model weights
checkpointer = ModelCheckpoint(
    "best_model.h5",
    monitor='val_accuracy',
    verbose=1,
    save_best_only=True,
    save_weights_only=True
)

# set up early stopping
# (restore_best_weights is left at its default of False here, so the best
# weights come from the checkpoint file above rather than from this callback)
earlystopper = EarlyStopping(
    monitor='val_accuracy',
    min_delta=0.001,
    patience=20,
    verbose=1
)

3. Train the model

# train the model
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs,
    callbacks = [checkpointer,earlystopper]
)
Epoch 1/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 1: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 492ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6932 - val_accuracy: 0.5000
Epoch 2/50
16/16 [==============================] - ETA: 0s - loss: 0.6933 - accuracy: 0.5000
Epoch 2: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 511ms/step - loss: 0.6933 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 3/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 3: val_accuracy did not improve from 0.50000
16/16 [==============================] - 7s 466ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 4/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 4: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 469ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 5/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 5: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 496ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6932 - val_accuracy: 0.5000
Epoch 6/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 6: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 474ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 7/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 7: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 498ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 8/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.4681
Epoch 8: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 488ms/step - loss: 0.6932 - accuracy: 0.4681 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 9/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 9: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 489ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 10/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 10: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 502ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6932 - val_accuracy: 0.5000
Epoch 11/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 11: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 497ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6932 - val_accuracy: 0.5000
Epoch 12/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 12: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 473ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6932 - val_accuracy: 0.5000
Epoch 13/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 13: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 479ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 14/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 14: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 481ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6932 - val_accuracy: 0.5000
Epoch 15/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 15: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 479ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 16/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 16: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 476ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 17/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 17: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 474ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 18/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 18: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 475ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 19/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 19: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 477ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 20/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 20: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 481ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 21/50
16/16 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.5000
Epoch 21: val_accuracy did not improve from 0.50000
16/16 [==============================] - 8s 483ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 21: early stopping

Note that the loss is pinned near ln 2 ≈ 0.6932 with 50% accuracy for all 21 epochs: the network never learned anything beyond chance on this two-class task. This is exactly the "large learning rate fails to converge" failure mode listed at the top; an initial learning rate of 0.1 is very aggressive for Adam, so lowering initial_learning_rate (e.g. to 1e-3) is the first thing worth trying.

V. Model Evaluation

1. Loss and accuracy curves

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(loss))  # use the actual number of epochs run (early stopping ended training at epoch 21)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label = 'Training Accuracy')
plt.plot(epochs_range, val_acc, label = 'Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label = 'Training Loss')
plt.plot(epochs_range, val_loss, label = 'Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')

plt.show()

[Figure: training/validation accuracy (left) and loss (right) curves]

2. Predict on a specified image

# load the best-performing model weights
model.load_weights('best_model.h5')
from PIL import Image
import numpy as np

img = Image.open('./data/test/nike/1.jpg')  # pick any image you want to predict on
img = np.array(img)                          # PIL image -> numpy array
img = tf.convert_to_tensor(img, dtype=tf.float32)
image = tf.image.resize(img, [img_height, img_width])  # resize to the model's input size

img_array = tf.expand_dims(image, 0)  # add a batch dimension

predictions = model.predict(img_array)  # run the trained model
print('Predicted class:', class_names[np.argmax(predictions)])

Predicted class: nike
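
Because the output layer produces raw logits (we compiled with from_logits=True), the values in predictions are not probabilities. If you also want a confidence score, a small sketch using the standard tf.nn.softmax:

score = tf.nn.softmax(predictions[0])  # normalize the two logits into class probabilities
print('Predicted class: {} ({:.2f}% confidence)'.format(
    class_names[np.argmax(score)], 100 * np.max(score)))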
